
Exceptional Creativity in Science and Technology

Exceptional Creativity in Science and Technology
Individuals, Institutions, and Innovations
Edited by Andrew Robinson

Templeton Press

Templeton Press
300 Conshohocken State Road, Suite 500
West Conshohocken, PA 19428
www.templetonpress.org

© 2013 by Templeton Press

All rights reserved. No part of this book may be used or reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the written permission of Templeton Press.

Typeset and designed by Gopa and Ted2, Inc.

Library of Congress Cataloging-in-Publication Data on file.

Printed in the United States of America

For Baruch S. Blumberg (1925–2011)

Renowned for his contributions to understanding and preventing viral hepatitis, Baruch Blumberg was awarded a Nobel Prize in medicine in 1976 for work that has had a far-reaching impact on public health around the globe. He was a deeply philosophical and exceptionally imaginative scientist. University Professor of Medicine and Anthropology at the University of Pennsylvania and Distinguished Scientist at Fox Chase Cancer Center as well as president of the American Philosophical Society at his death, he had served as master of Balliol College, Oxford, and as founding director of the NASA Astrobiology Institute. As a biochemist and geneticist, he improved the quality of life on Earth for millions of its inhabitants. As the NAI's chief, he led the ongoing search to discover how life might have originated and evolved here, evidence of its existence elsewhere in the universe, and hints about its future both on Earth and in the "untrespassed sanctity of space." As colleague and friend, who chaired the first conversations among many of this book's contributors at the Institute for Advanced Study, Princeton, in 2008, Barry was an inspiration to us all. We remember him for his brilliance, his tenacity, and his easy grace.

Contents

Introduction, by Andrew Robinson
Chapter 1. The Rise and Decline of Hegemonic Systems of Scientific Creativity, by J. Rogers Hollingsworth and David M. Gear
Chapter 2. Exceptional Creativity in Physics: Two Case Studies—Niels Bohr's Copenhagen Institute and Enrico Fermi's Rome Institute, by Gino Segrè
Chapter 3. Physics at Bell Labs, 1949–1984: Young Turks and Younger Turks, by Philip W. Anderson
Chapter 4. The Usefulness of Useless Knowledge: The Physical Realization of an Electronic Computing Instrument at the Institute for Advanced Study, Princeton, 1930–1958, by George Dyson
Chapter 5. Education and Exceptional Creativity: The Decoding of DNA and the Decipherment of Linear B, by Andrew Robinson
Chapter 6. The Sources of Modern Engineering Innovation, by David P. Billington and David P. Billington Jr.
Chapter 7. Technically Creative Environments, by Susan Hackwood
Chapter 8. Entrepreneurial Creativity, by Timothy F. Bresnahan
Chapter 9. Scientific Breakthroughs and Breakthrough Products: Creative Activity as Technology Turns into Applications, by Tony Hey and Jonathan Hey
Chapter 10. A Billion Fresh Pairs of Eyes: The Creation of Self-Adjustable Eyeglasses, by Joshua Silver
Chapter 11. New Ideas from High Platforms: Multigenerational Creativity at NASA, by Baruch S. Blumberg
Afterword: From Michael Faraday to Steve Jobs, by Freeman Dyson
Contributors
Index

Exceptional Creativity in Science and Technology

Introduction
Andrew Robinson

Fundamental research . . . may be curiosity driven, but may also result in an Aladdin's cave of new knowledge from which who knows what prizes may come in the future. . . . Anyone casting their eyes over the work going on in Oxford University in the late 1950s, looking for groups doing research with commercial potential, would probably have put the high-magnetic-field team in the Clarendon Laboratory at the bottom of the list. That work was the starting point for Oxford Instruments, and the taxes paid and generated by the Company over the years would, in total, be enough to finance, say, a new university or a hospital. The economic ramifications of one successful company can be very wide.

—Audrey Wood, cofounder with engineer Martin Wood of Oxford Instruments, in Magnetic Venture: The Story of Oxford Instruments (2001)1

In the evolution of science and technology, laws governing exceptional creativity and innovation have yet to be discovered. Writing in his influential study, The Structure of Scientific Revolutions (1962), the historian Thomas Kuhn noted that the final stage in a scientific breakthrough such as Albert Einstein's theory of relativity—that is, the most crucial stage—was "inscrutable."2 The same is still true half a century later. This lacuna is certainly not for lack of systematic investigation, stretching back to the creativity research boom of the 1950s and even to the time of the inventor Thomas Edison.

Scientists, engineers, designers, and inventors have a vital interest in understanding and promoting exceptional creativity and innovation. So do business corporations, patent offices, government committees and departments for science and technology, universities, and educational foundations around the world. Creativity and innovation are also much studied by academics from disciplines as diverse as psychology, business and management, economics, sociology, and history. However—at the risk of stating the obvious—exceptional creativity and innovation are extremely complex, varied, and far-reaching phenomena, which have proved resistant to measurement, regulation, and planning. Indeed, these very characteristics are intrinsic both to their importance and their allure.

The long-term effects of scientific discoveries and innovations are generally unpredictable. According to Henry Ford, writing in 1929, "The motion picture with its universal language, the airplane with its speed, and the radio with its coming international programme—these will soon bring the world to a complete understanding. Thus may we vision a United States of the World. Ultimately, it will surely come!"3 In the past decade, similar utopian claims have been made for the Internet and the World Wide Web.

In science, to take just one example of the mystery, consider high intelligence and exceptional creativity. One might naturally expect there to be a strong correlation between these two mental faculties. Lewis Terman, the Stanford University psychologist who first popularized IQ testing in the United States (and the father of Frederick Terman, one of the engineers who founded Silicon Valley), discovered a substantial group of high-IQ (135-plus) school students in California in the early 1920s—soon to be nicknamed "Termites"—and then monitored their success or failure as they grew into adulthood. But after following the group's individual careers over several decades, Terman and his coworkers were obliged to admit that none of these gifted students, for all their considerable worldly success, had achieved anywhere near genius in any field.

None of the Termites had won a Pulitzer Prize or a Nobel Prize, for instance; moreover, Terman's initial IQ tests rejected the future Nobel Prize–winner William Shockley, after twice testing him at school, as they also did another future Nobel Prize winner in physics, Luis Alvarez. Had Terman tested Richard Feynman, yet another future Nobel laureate and an icon of twentieth-century American physics, he would have rejected Feynman for his gifted group, too—given the latter's reported IQ score (a relatively modest 125), as measured at his school in New York around 1930. Today, it is widely accepted by psychologists that above an IQ of about 120 there is no correlation between IQ and exceptional creativity.4

In technology, the path from invention to innovation—that is, an invention that leads to a commercially successful product—often defies rationalization. When the telephone was invented in the 1870s, it was initially regarded not as an altogether new technology but rather as an improvement of the electric telegraph, which had been invented in the 1830s. Alexander Graham Bell's 1876 patent on the telephone was titled "Improvements in Telegraphy." In a letter to potential British investors, Bell argued, "All other telegraphic machines produce signals which require to be translated by experts, and such instruments are therefore extremely limited in their application. But the telephone actually speaks."5

Almost as surprising, at least with the wisdom of hindsight, is the fact that the scientists and engineers who founded the computing industry in the 1940s–1960s did not foresee that personal computers would be useful to white-collar workers in offices; that vision arrived only in the mid-1970s with the founding of the companies Apple and Microsoft. Perhaps more understandable, yet still surprising, is the story of the invention of the laser and its subsequent widespread applications. When the first lasers became operational in the early 1960s, they were regarded as potentially useless for several years. Colleagues of Charles Townes, one of the laser's inventors at Bell Laboratories, famously used to tease him by saying, "That's a great idea, but it's a solution looking for a problem." As Townes admitted some four decades later, "The truth is, none of us who worked on the first lasers imagined how many uses there might eventually be."6

Exceptional Creativity in Science and Technology arises from a symposium with the same title held in November–December 2008 at the Institute for Advanced Study (IAS) in Princeton. Organized by the John Templeton Foundation, the symposium had as its chair the Nobel Prize–winning doctor and geneticist Baruch Blumberg, while its IAS host was the physicist Freeman Dyson. Nine of the chapters in the book (chapters 1, 2, 4, 5, 6, 7, 8, and 11, plus the Afterword by Freeman Dyson) are from participants in the symposium, while a further three (chapters 3, 9, and 10) are invited contributions from nonparticipants, intended to extend the range of the book by discussing a seminal institution, Bell Laboratories, during its most creative period; the development of innovative ideas into commercial products; and the experiences of a current inventor working within a leading university physics department. The original intention was that the book would be jointly edited by its current editor with the assistance of Baruch Blumberg, but sadly he passed away in 2011.

The IAS symposium's chief aim was to discuss the relationship between exceptional scientific creativity and innovation, individuals, and institutions—with the focus on the United States and Europe during the modern era. Key questions for the participants included: What characterizes a creative environment? Which environments most effectively nurture the highest levels of scientific creativity? What is the sequence and range of steps leading to the creation of a high-creativity institution? Do structures such as organizational governance and operating principles matter? To what degree is autonomous self-governance in research institutions important? What effect do government funding and regulation have on creativity? To what extent does freedom of inquiry matter?

The study of failures and declines, as well as successes, is highly illuminating. What are the dynamics that portend fiascos in attempts to build creative institutions? When and how does an institution lose its creative edge? How do geographically large, amazingly creative domains such as Silicon Valley come to be, and how can they be developed elsewhere? How are gifted young people most effectively encouraged and prepared to pursue careers marked by exceptional creativity in mathematics, science, technology, or technologically based industry? All of these questions were discussed at the symposium, and the responses to them form part of this book.

The complexity of the subject is well illustrated by the Princeton University physicist Philip Anderson in his chapter on "Physics at Bell Labs, 1949–1984: Young Turks and Younger Turks"—a personal, entertaining, but nonetheless penetrating memoir by an insider who was a member of the technical staff of Bell Labs from 1949 to 1984 and was awarded a Nobel Prize in 1977—one of seven Nobels in physics awarded for work done at Bell Labs. Before the Second World War, Bell Telephone Laboratories, though already a significant innovator in the communications business, did not encourage curiosity-driven research. What was it, asks Anderson, that turned the postwar Bell Labs into an international byword for scientific creativity and invention?

Many people have pondered this question, including several of the contributors to this book in addition to Anderson. As the science writer Jon Gertner makes clear in his well-researched study published in 2012, The Idea Factory: Bell Labs and the Great Age of American Innovation, Bell Labs veterans, not to mention outside commentators, have come up with somewhat different answers.7 All agree that the government-approved monopoly over the U.S. telephone industry enjoyed until 1984 by AT&T, Bell Laboratories' parent company, created a unique situation that encouraged the company to support fundamental research—at least partly for reasons of public interest, but also to protect its telephone monopoly from periodic government attempts at deregulation. (One is somewhat reminded of a similar situation in Britain with the BBC, which enjoyed a monopoly in broadcasting until the mid-1950s.) Another key factor in the Labs' success was surely the generous salaries and the prodigious resources lavished on Bell Laboratories' technical staff as a result of AT&T's financial wealth.

But according to Anderson, the deeper answer to the question is more subtle than simply these political and financial advantages. It involves both the culture of the postwar United States and the culture of Bell Labs' management, which for a long period attached more value to scientific and technological creativity than to the commercial exploitation of its groundbreaking inventions. Anderson writes:

Thinking about [the situation] after all this time, it almost seems inevitable. The key is that in that early burst of hiring after the war, and the next round brought on by the exhilaration of success, the Labs had done far too well: they had all but cornered the market. At the same time, the rest of the world was more and more waking up to the fact that solid-state physics was something worth pursuing. Finally, the people they had hired were of a type and quality that was certain to find the status of docile wage slave, no matter how paternally coddled, a bit uncomfortable.

Bell Labs' management, says Anderson, had two alternatives in the early 1950s. "One was to let us all go, once we had realized our value in the outside world—to replace us with more docile, if less creative, scientists who would do what they were told." The other alternative was to change the company's style of management by abandoning the conventional hierarchical structure of an industrial laboratory and introducing some of the elements of a high-flying university. This was the decision, Anderson observes, that William Shockley, one of the three discoverers/inventors of the transistor at Bell Labs in the late 1940s,8 forced on the Labs as a consequence of his overbearing treatment of his co-inventor, John Bardeen. In 1951 Bardeen decided to leave Bell Labs to work more freely in a university setting, where in due course he became the world's only double Nobel laureate in physics. Bardeen's unwanted departure was a lesson to Bell Labs' management. "It is very much to the credit of the Labs' management of the time," comments Anderson, "that they chose the other alternative, namely to switch to a very different management style in order to hold on to what they realized was an irreplaceable asset."

We shall return to this overarching issue of the role of national and institutional culture in exceptional science and technology. But first, a few words on the structure of the book. The eleven chapters are arranged in a roughly chronological order. Thus, the first six are chiefly, though by no means exclusively, about the history of the subject. They begin with a survey of the past 250 years; move on to consider key institutions in science, such as Niels Bohr's Copenhagen Institute for Theoretical Physics, and key scientific and technological breakthroughs, such as the decoding of the structure of DNA at the Cavendish Laboratory in Cambridge and the building of an electronic computer at the IAS in Princeton; and end with the sources of modern engineering innovation, in a chapter which examines examples of historic technological breakthroughs—for instance, the electric power grid, the automobile, the airplane, and the microchip—and tries to draw lessons from these earlier innovations for our contemporary world. This leads naturally to the second half of the book, which deals mainly with the conditions required for the current flourishing of exceptional science, technology, and innovation in universities, research institutes, and companies, finishing up with potential future developments that may arise from current and projected space missions—in which Baruch Blumberg had a special interest as the founding director of the NASA Astrobiology Institute from 1999 to 2002. While the emphasis of the book's first half is more on exceptional science, that of the second half is more on exceptional technology.

In chapter 1, "The Rise and Decline of Hegemonic Systems of Creativity," the sociologist J. Rogers Hollingsworth and his coauthor David Gear examine the reasons for the successive dominance of four national systems of science in the world: first in France from about 1750 to the early decades of the nineteenth century; then in Germany from the mid-nineteenth century to the first decade of the twentieth century; then in Britain during the first half of the twentieth century; and finally in the United States during the second half of the twentieth century to date—an American dominance that some fear may now be declining, possibly to give way to the rise of science and technology in Asia, especially China. The authors argue that highly creative science systems became embedded only in those countries that were economic, political, and military hegemons. A country's economic, political, and military hegemonic power gave birth to its scientific hegemony. A scientific hegemon tended to dominate the world's leading scientific journals; it established standards of excellence in most scientific fields; its language became the dominant one in facilitating scientific communication with other countries; its leading scientists acquired prominence in the global world of science; and young people flocked to the hegemonic country for training. However, as a country's economic, political, and military power began to decline relative to other countries, a relative decline took place in its scientific creativity.

There can be no doubt that France, Germany, Britain, and the United States did indeed dominate the world scientific scene during the periods ascribed to them above. Yet the existence of a direct link between exceptional scientific and technological creativity and national economic, political, and military dominance may not be as straightforward as appears at first sight. For example (as noted by the authors), science, especially physics, flourished at the University of Göttingen in the 1920s, despite the collapse of German power and the German economy after the First World War, until the rise of the Nazis in 1933. It also flourished in Copenhagen during the same period and after, despite Denmark's economic, political, and military insignificance. Although most of the credit for this belongs to Niels Bohr, his institute received financial support from both the Danish government and Danish foundations (as discussed by Gino Segrè in chapter 2).

In India, exceptionally creative science was also carried out during the first half of the twentieth century by Indians such as the Calcutta-based theoretical physicist S. N. Bose, who collaborated with Einstein on Bose-Einstein statistics (hence the fundamental particles known as bosons), and the Bangalore-based experimental physicist C. V. Raman, who was awarded Asia's first Nobel Prize in science in 1930. None of this internationally renowned Indian scientific research was actively encouraged by the politically dominant British colonial power.9

But perhaps the most problematic example for the above thesis is that of Britain. Exceptionally creative science occurred in Britain long before the United Kingdom and its empire became the world's dominant power in the late nineteenth century, beginning in the seventeenth century with William Harvey, Isaac Newton, Robert Boyle, Robert Hooke, and some other fellows of the Royal Society. Their discoveries were followed by the first industrial revolution, starting in England in the second half of the eighteenth century. Exceptional science and technology continued in the second half of the twentieth century after the disappearance of Britain's international political dominance. As Hollingsworth and Gear observe, British scientists have received more than two dozen Nobel Prizes for work begun after 1950. Something more than Britain's economic, political, and military hegemony in the first half of the twentieth century must have been at work in the development of exceptional British science and technology.

A good picture of what this something was appears in the foreword to a recent book on science and technology in eighteenth-century England, written by two historians of science, the astronomer Patrick Moore and Allan Chapman. After mentioning some of the important seventeenth-century British scientists, Moore and Chapman comment:

As well as providing this succession of great discoveries, Britain's wider culture proved very conducive to scientific and technological innovation. Despite the poverty and despair highlighted by artists like William Hogarth and other social commentators, Britain (and England in particular) had the largest, richest, and most independent middle class in world history: a people who had both formed and been formed by a variety of circumstances peculiar to Britain and its emerging American colonies.

These included a Parliament-based political and legal tradition, a popular monarchy whose powers were limited by statute, a free press, a tolerant state Church from which one had the right to "Dissent," and a flourishing economy, with a great deal of spare cash being generated from overseas trade and industrial enterprise and by the already globally dominant City of London. So Georgian England had numerous comfortably-off middling country gentry, clergy, and professionals, with money to spend and leisure in which to enjoy its fruits. Moreover, since it was regarded as a gentleman's prerogative to act freely, an individual might spend his money as wisely or as foolishly as he chose. Besides sport, theatre, balls, and fun, such people actively pursued art, music, and science. Indeed, a gentleman's credibility rested on his knowledge of "culture," and by 1750 this would include such things as Newton's Laws or how clocks worked, as well as architecture, perspective drawing, literature, racehorses, dogs, and boxing.10

Just such a culture encouraged the development of one of England's greatest polymaths, Thomas Young, who was born in the rural county of Somerset in 1773 and grew up among Quaker bankers, and about whom I wrote a biography titled The Last Man Who Knew Everything. Best known in science for his discovery of the interference of light waves in his double-slit experiment around 1801, Young (whose profession was actually medicine) has numerous other claims to fame—in physics, the physiology of the eye, linguistics, and even Egyptology, where he was the first to decipher the Rosetta Stone with a degree of success and thereby launched the decipherment of the Egyptian hieroglyphs. In an exhibition on Young arranged by London's Science Museum for his bicentenary in 1973, the organizers went so far as to state, "Young probably had a wider range of creative learning than any other Englishman in history. He made discoveries in nearly every field he studied."11

National cultures, even though we may struggle to define them, are therefore influential on exceptional creativity in science and technology. Moreover, they can inhibit creativity as well as encourage it. Inhibition has tended to be the case, at least until very recently, in Asian cultures. Postcolonial India, for example, has failed to fulfill the scientific promise shown by its exceptional scientists in the first half of the twentieth century, largely because of the Indian government's bureaucratic dominance of Indian universities and even the best Indian research institutes, coupled with the debilitating effect of caste politics. Although Indians—for instance, the astrophysicist Subrahmanyan Chandrasekhar—have won Nobel Prizes since C. V. Raman in 1930, in each case the prize was awarded for work carried out in universities in the United States or Europe, not in India.

Regarding Japan, the Japanese-born engineer Shuji Nakamura, who pioneered the white and blue LEDs (light-emitting diodes) while working for a company in Japan, later immigrated to the United States because he found it difficult to function successfully in Japan as an inventor. "Everything is different in the U.S. Individuals are important, whereas in Japan the group as a whole is valued more," Nakamura said in 2007. "This might be the strength of U.S. research, because inventions always come from individuals rather than from groups. I think individual creativity is more likely to flourish in the U.S. system, whereas the system in Japan is more suitable to mass production."12

Regarding China, a Chinese-born environmental scientist, Peng Gong, with experience of working in universities in both China and the United States, wrote in the scientific journal Nature in 2012:

Two cultural genes have passed through generations of Chinese intellectuals for more than 2,000 years. The first is the thoughts of Confucius, who proposed that intellectuals should become loyal administrators. The second is the writings of Zhuang Zhou, who said that harmonious society would come from isolating families so as to avoid exchange and conflict, and by shunning technology to avoid greed.

Together, these cultures have encouraged small-scale and self-sufficient practices in Chinese society, but discouraged curiosity, commercialization, and technology. They helped to produce a scientific void in Chinese society that persisted for millennia. And they continue to be relevant today.13

The influence of cultures, whether national or institutional, or both together, turns out to be a connecting theme of this book. Although it is explicit in chapters 1, 2, 3, 4, 7, and 11, it is implicit in chapters 5, 6, 8, 9, and 10, as well as the Afterword.

In chapter 2, "Exceptional Creativity in Physics," physicist Gino Segrè identifies the circumstances that foster exceptional creativity in science, especially physics, by concentrating on two important historical institutions: the Copenhagen Institute for Theoretical Physics (now renamed the Niels Bohr Institute) during the 1920s, and Enrico Fermi's group at the University of Rome during the 1930s. The first institution played a key role in the development of quantum mechanics, the second in the development of nuclear physics. The chapter considers the particular times, places, environments, mentoring, and individuals involved, and tries to draw some general conclusions. What is the opportune time to recognize that a field has evolved to the point where exceptional intellectual challenges—the 1913 Bohr model of the atom (for Bohr's group) and James Chadwick's 1932 discovery of the neutron (for Fermi's group)—or their applications can significantly alter the way people live and work? In what kind of institutions can one address these challenges? What sort of encouragement is beneficial, and how important is it to have a variety of backgrounds and training? In what ways do mentors need to be involved in the research in question? Who are the exceptional individuals who can respond to these challenges, and how are they identified? What type of educational system promotes their development? How can such individuals be encouraged and supported?

Although Bohr was essentially a theorist while Fermi was an experimentalist who was also a theorist, many of their answers to the above questions were similar, and their approaches later influenced the working of other research institutions in Europe, including CERN, and in the United States, where Fermi settled in 1938 and played a key role in making the atomic bomb.

Next we come to chapters on two leading U.S. research institutions: the first on Bell Laboratories by Philip Anderson, discussed earlier, and the second on the Institute for Advanced Study in Princeton, "The Usefulness of Useless Knowledge," written by the historian of science and technology George Dyson. His title is borrowed from a magazine article published in 1939 by Abraham Flexner, the founding director of the IAS. "The pursuit of these useless satisfactions proves unexpectedly the source from which undreamed-of utility is derived," wrote Flexner.14

The IAS does not maintain laboratories, develop new technology, or grant appointments to engineers. The one exception to this policy—John von Neumann's Electronic Computer Project, launched in 1945 and terminated in 1958—contributed more to the advance of human knowledge (including mathematics, physics, biology, economics, environmental science, politics, and nuclear weaponry) than any other undertaking in the history of the IAS. This chapter reviews the founding of the IAS and the birth of its intellectual culture, and explains how one of the most influential engineering achievements of all time came to fruition at an institution specifically designed to avoid experimental research. Instead of importing a handful of mathematical logicians into one of the existing organizations that had the facilities and resources to build a computer (like the Massachusetts Institute of Technology), von Neumann imported a handful of engineers into the IAS. Free of any preconceptions as to how the new machine should be designed, built, or used, they unleashed one of the more exceptionally creative episodes in the history of humanity, technology, and biology.

My own chapter, "Education and Exceptional Creativity," focuses not on a single institution but on the way in which several educational and research institutions contributed to the solving of two important problems in the 1950s. I begin by looking generally at the education of exceptionally creative scientists (and some artists, too, by way of comparison). Formal education has long had an uneasy relationship with exceptional creativity—perhaps most notably in the arts, but also in science. Pioneering figures such as Newton, Darwin, Marie Curie, and Einstein had university training, which they found necessary and sometimes stimulating. Yet their initial breakthroughs were made when working outside a university, and required them to reject ideas then prevailing in the academy. There are plenty of other examples of this phenomenon, such as the extraordinary Indian clerk-turned-mathematician Srinivasa Ramanujan a century ago, and the phenomenon continues to be important and intriguing, if hard to analyze.

It reveals itself in a thought-provoking way in a comparison of the decoding of the structure of DNA in 1953 with the decipherment of Minoan Linear B, achieved at the same time. Both discoveries involved five key individuals, four men and one woman in each case, who played curiously similar roles; however, the roles of the major participating academic institutions differed significantly. The two stories do not point to a straightforward link between institutional support and the encouragement of exceptional creativity. The vigorous culture of give-and-take at the Cavendish Laboratory in Cambridge clearly encouraged creative theoretical solutions to the decoding of DNA. The lack of this culture at King's College, London, discouraged them, but permitted crucial accumulation of data. Linear B was deciphered essentially without any institutional support. The California Institute of Technology and the University of Oxford, despite having major resources and highly relevant expertise, failed to grasp the opportunities. Perhaps the most striking overall conclusion is that the two most creative figures in these triumphs—Francis Crick (DNA) and Michael Ventris (Linear B)—were the least institutionalized.

Reference has already been made to the content of chapter 6, "The Sources of Modern Engineering Innovation," written by the Princeton University engineer David P. Billington and David P. Billington Jr. They argue that the transformation of American life by its industrial revolution in the late nineteenth and early twentieth centuries "was primarily the work of engineering that embodied radically new technical ideas, in which the contribution of science came after the breakthrough, not before it." They describe in several case studies how major engineering innovations actually occurred in the United States, including Edison's development of the electric power grid, the first powered flight by the Wright brothers, and the inception of the integrated circuit (the microchip) by Jack Kilby and Robert Noyce. These examples show that trained knowledge and skills are not sufficient to generate radical innovations. The breakthroughs were the result of deeply original thinking by one or two individuals who broke with conventional notions. The institutions and settings commonly associated with technical creativity, such as industrial laboratories and centers of high technology, have been the result of these breakthroughs and not the cause of them. Given the mounting concern in the early twenty-first century over whether the United States and other advanced countries can sustain the technical capacities that enabled them to prosper in the twentieth century, the authors propose that a new generation can best learn how to innovate radically by studying examples of radical innovation as part of a general high school and undergraduate education.

The engineer Susan Hackwood, who was a member of the technical staff at Bell Laboratories from 1979 to 1984 and later moved to the University of California, is also preoccupied with technical education in the United States. In her chapter, "Technically Creative Environments," she accepts that we have much more reliable knowledge of the factors that inhibit creativity—such as the appointment of intelligent but not creative people as managers and leaders—than of what causes or enhances creativity. Drawing on her exposure to the creative technical and management culture at Bell Labs, and her wide-ranging experience of science education in California, she outlines the academic knowledge of creativity and uses it to consider how to foster technically creative environments.

In particular, how does a group of technically expert and creative individuals work together to achieve higher creativity? How should they be governed, supported, and appreciated as independent thinkers? Investigating to what extent the quality of life of the community in which research centers and universities are located correlates with the creative output of the institution is, she argues, an important field of research. After looking at the factors that influence the production and retention of creative people, the chapter discusses a few practices that are disastrous and some that work. It also argues that the number of technically creative people who can realistically be imported into the United States far exceeds the maximum number that can ever be produced in-house, even with the best social and educational methods. Thus, the United States should continue to foster in all possible ways the international brain mobility of scientists seen in recent decades.

In the next three chapters, the discussion moves to the realities of how inventions are developed and marketed as commercial products. The first of these chapters, "Entrepreneurial Creativity," is by Timothy Bresnahan, an economist working at Stanford University. He starts from a familiar truth: exceptionally creative scientists and inventors do not always exploit their research and inventions successfully. Indeed, a good example of this failure is Bell Laboratories, which even in its Nobel Prize–winning heyday was poor in entrepreneurial skills. Improvements in the standard of living for all the world call for both scientific creativity and "entrepreneurial creativity"—meaning the ability, as defined by the author, "to locate and exploit overlaps between what is technically feasible and what will create value for society." The conception of new ideas, the creation of new products and processes, and their market use in improving people's lives are all essential in creating a better-off society. Entrepreneurial creativity, like scientific and engineering creativity, is critical for long-run, innovation-based economic growth, but although it is related to scientific and engineering creativity, it is distinct from them. To the entrepreneur, market knowledge is no less important than scientific or technical knowledge.

The author, who has studied the entrepreneurial culture of Silicon Valley over many years, draws his examples mainly from the history of computing. One of his goals is to state with reasonable precision what entrepreneurial creativity is, and to list with reasonable completeness the parts of technical advance for which entrepreneurial creativity has been responsible. Another goal is to analyze the best institutions, especially at the boundaries between science or engineering and entrepreneurship, to see how to enhance creativity in the interests of long-run economic growth and improvement.

Chapter 9, "Scientific Breakthroughs and Breakthrough Products," looks at entrepreneurial creativity from the point of view of the scientists and designers. It is jointly written by a former British university physicist, Tony Hey, who is now a vice president at Microsoft Research, based in the United States, where he is responsible for building partnerships with the academic community, and by his son Jonathan Hey, a user experience designer based in London, who previously studied design at the University of California, Berkeley. Academic scientists have long had links with industry; even Einstein, the onetime patent clerk, designed a leak-proof refrigerator for the companies Electrolux and AEG. But successful new product development is a challenging activity, with only 30 percent of new products surviving to their second year on the store shelves. How do such very different institutions—universities and companies—encourage creative outcomes in the marketplace? To the scientist, originality is paramount for a research breakthrough, whereas the majority of a company's customers do not care that a product is new and different, only that it meets their needs. In universities, scientists typically work alone or in small groups, whereas in industry large teams are typical, often drawing their input from multiple research laboratories. The authors examine university and industry roles in the development of the World Wide Web, wireless sensor technology, Microsoft's Kinect motion sensor for the Xbox video game console, and the Segway Personal Transporter, a novel vehicle that has so far failed to find a large market.

The importance of framing design challenges around user needs emerges from these case studies. Through a better understanding of user-centered technology design and multidisciplinary teams, it is possible to formulate a set of guidelines and practices to help new product design teams navigate the difficult design phases.

The former Oxford University physicist Joshua Silver is at the sharp end of such design issues. His professional career has been spent in atomic physics at Oxford's Clarendon Laboratory. However, in the 1980s he used his university-trained knowledge of optics to invent a new type of self-adjustable eyeglasses using fluid-filled lenses. His initial impetus was simply curiosity about the problem of a variable-focus lens. But soon he visualized the invention's usefulness for the many millions of people in the developing world who have no access to an optometrist and no money to pay for a pair of conventional, fixed-focus eyeglasses. In chapter 10, "A Billion Fresh Pairs of Eyes," Silver describes his invention and the subsequent lengthy process of its development. After some years of field trials in Africa, China, and the United States by vision specialists, with the support of bodies such as the World Health Organization and the World Bank, the self-adjustable eyeglasses are now in production as a collaboration between the U.K.-based Centre for Vision in the Developing World in Oxford—which Silver helped to found in 2009—and the U.S.-based Dow Corning Corporation, a global leader in silicon-based technology. Although earlier versions of the eyeglasses, known as Adspecs, have been successfully used by wearers in Africa and Asia, their relatively clunky appearance continues to pose a problem, especially for younger, fashion-conscious wearers; Silver is confident, however, that technical improvements to the design will enable aesthetic improvements. In 2011, following Silver's demonstration of the Adspecs at the National Health Service's Innovation Expo in London, an audience of health professionals voted self-adjustable refraction "The idea most likely to make the biggest impact on health care by the year 2020." His ambitious goal is to have a billion people wearing self-adjustable eyeglasses by 2020.

To conclude, the last chapter conjoins exceptionally creative science and exceptionally creative technology. "New Ideas from High Platforms" by Baruch Blumberg moves into space, beginning with a personal memoir of the founding of the NASA Astrobiology Institute (NAI) in 1999 and its active links with astrobiology programs in other countries. As a deliberate part of its creative culture, Blumberg's management of the NAI was dispersed rather than command-and-control. As in universities (and at Bell Laboratories during its most creative years), NAI teams were encouraged to collaborate with whomever they thought would enrich their research.

The second part of the chapter then deals with creativity in space. Searching in previously unreachable locations increases the possibility of encountering the new. By observing from high platforms never before available, we gain a rich source of new observations and new concepts. A few of the current and projected range of space-related missions of exploration and discovery are described, with the contributions that these discoveries have already made to our earthbound life; and the multigenerational nature of long-term space research is emphasized. For example, NASA's Mars Science Laboratory rover is one of the most ambitious attempts to determine whether microbial life does or could exist, or has existed, on Mars. NASA's projected Kepler Mission will seek planets around other stars that could nourish life like that of the "Pale Blue Dot"—the name given to the photograph of Earth taken from deep space by Voyager 1 in 1990 on its way out of the solar system. New products arising from space-related activity include advanced lithium batteries to improve the capabilities of electric vehicles, space-suit materials to help protect divers in hostile marine environments, and a method for manufacturing an algae-based food supplement that provides nutrients previously available only in breast milk. Thus, the exploration of space has enhanced, and will continue to enhance, not only science and technology but also the economy and life on Earth.

Acknowledgments

As editor, I would like to thank all the contributors and the John Templeton Foundation, in particular Mary Ann Meyers, the senior fellow of the foundation, who participated in the 2008 symposium at the Institute for Advanced Study. Without her confidence and persistence this collection would not have appeared. Paul Davies kindly checked a draft of the chapter by Baruch Blumberg. Phil Anderson, a colleague and friend of my late father at Bell Laboratories from the 1950s onward, was supportive of my efforts—as always.

Notes

1. Audrey Wood, Magnetic Venture: The Story of Oxford Instruments (Oxford: Oxford University Press, 2001), 104. Oxford Instruments was started in 1959 in the garden shed of the Wood family in Oxford.

2. Thomas S. Kuhn, The Structure of Scientific Revolutions, 50th anniversary edition (Chicago: University of Chicago Press, 2012), 90.

3. Quoted in David Edgerton, The Shock of the Old: Technology and Global History since 1900 (London: Profile, 2006), 114. Edgerton's book is a thought-provoking corrective to the common assumption that the latest technology is always the best, and that the pace of innovation constantly accelerates. "By the standards of the past, the present does not seem radically innovative," Edgerton notes. "Indeed judging from the present, the past looks extraordinarily inventive. We need only think of the twenty years 1890–1910 which gave us, among the more visible new products, X-rays, the motor car, flight, the cinema, and radio" (page 203).

4. See Andrew Robinson, Sudden Genius? The Gradual Path to Creative Breakthroughs (Oxford: Oxford University Press, 2010), especially chap. 2; and Joel Shurkin, Terman's Kids: The Groundbreaking Study of How the Gifted Grow Up (New York: Little, Brown, 1992).

5. Quoted in Tom Standage, The Victorian Internet: The Remarkable Story of the Telegraph and the Nineteenth Century's Online Pioneers (London: Phoenix, 1999), 185.

6. Charles H. Townes, How the Laser Happened: Adventures of a Scientist (New York: Oxford University Press, 1999), 4.

7. See Jon Gertner, The Idea Factory: Bell Labs and the Great Age of American Innovation (New York: Penguin, 2012), 300, 350–52 (for the analysis of the Bell Labs engineer John R. Pierce).

8. Bardeen referred to his (and Walter Brattain's) "discovery" of "transistor action," while Shockley thought of the transistor as a device and therefore an invention, which could be patented. See ibid., 106–7.

9. See Abha Sur, Dispersed Radiance: Caste, Gender, and Modern Science in India (New Delhi: Navayana, 2011).

10. Foreword to Innovation and Discovery: Bath and the Rise of Science, ed. Peter Wallis (Bath: Bath Royal Literary and Scientific Institution and the William Herschel Society, 2008), 5.

11. Quoted in Andrew Robinson, The Last Man Who Knew Everything: Thomas Young (Oxford: Oneworld, 2006), ix.

12. Interview with Nakamura by Jane Qiu, in "The Blue Revolutionary," New Scientist, January 6, 2007, 44–45.

13. Peng Gong, "Cultural History Holds Back Chinese Research," Nature 481 (2012): 411.

14. Abraham Flexner, "The Usefulness of Useless Knowledge," Harper's Magazine, October 1939, 548.

Bibliography

Edgerton, David. The Shock of the Old: Technology and Global History since 1900. London: Profile, 2006.
Gertner, Jon. The Idea Factory: Bell Labs and the Great Age of American Innovation. New York: Penguin, 2012.
Harford, Tim. Adapt: Why Success Always Starts with Failure. London: Little, Brown, 2011.
Isaacson, Walter. Steve Jobs. New York: Simon & Schuster, 2011.
Johnstone, Bob. Brilliant! Shuji Nakamura and the Revolution in Lighting Technology. New York: Prometheus, 2007.
Kuhn, Thomas S. The Structure of Scientific Revolutions. 50th anniversary edition, with an introductory essay by Ian Hacking. Chicago: University of Chicago Press, 2012.
Meyers, Morton A. Prize Fight: The Race and the Rivalry to Be the First in Science. New York: Palgrave Macmillan, 2012.
Pfenninger, Karl H., and Valerie R. Shubik, eds. The Origins of Creativity. New York: Oxford University Press, 2001.
Robinson, Andrew. The Last Man Who Knew Everything: Thomas Young. Oxford: Oneworld, 2006.
———. Sudden Genius? The Gradual Path to Creative Breakthroughs. Oxford: Oxford University Press, 2010.
Seabrook, John. Flash of Genius: And Other True Stories of Invention. New York: St. Martin's Press, 2008.

Shurkin, Joel. Terman's Kids: The Groundbreaking Study of How the Gifted Grow Up. New York: Little, Brown, 1992.
Simonton, Dean Keith. Creativity in Science: Chance, Logic, Genius, and Zeitgeist. New York: Oxford University Press, 2004.
Standage, Tom. The Victorian Internet: The Remarkable Story of the Telegraph and the Nineteenth Century's Online Pioneers. London: Phoenix, 1999.
Sur, Abha. Dispersed Radiance: Caste, Gender, and Modern Science in India. New Delhi: Navayana, 2011.
Townes, Charles H. How the Laser Happened: Adventures of a Scientist. New York: Oxford University Press, 1999.
Van Dulken, Stephen. Inventing the 20th Century: 100 Inventions That Shaped the World. London: British Library, 2000.
Wallis, Peter, ed. Innovation and Discovery: Bath and the Rise of Science. Bath: Bath Royal Literary and Scientific Institution and the William Herschel Society, 2008.
Weisberg, Robert W. Creativity: Understanding Innovation in Problem Solving, Science, Invention, and the Arts. Hoboken, NJ: John Wiley, 2006.
Wood, Audrey. Magnetic Venture: The Story of Oxford Instruments. Oxford: Oxford University Press, 2001.

Chapter One

The Rise and Decline of Hegemonic Systems of Scientific Creativity
J. Rogers Hollingsworth and David M. Gear

Addressing creativity at the level of a society has a long tradition, whether it be the ancient Greek city-state, sixteenth-century Florence, France during the Enlightenment, the United Kingdom during the Industrial Revolution, or other societies. In order to gain a fresh perspective on creativity in the contemporary world, this chapter extends the tradition by focusing on the rise and decline of creativity at the level of the nation-state during the past 250 years. Despite our focus on the societal level, we recognize that most acts of creativity occur at the level of the individual. But by aggregating acts of creativity, it becomes possible to analyze creativity at the level of a society, at the level of an organization (e.g., Bell Labs in the United States, the Laboratory of Molecular Biology in the United Kingdom, the Max Planck Institutes in Germany, Rockefeller University in the United States), and at the level of a university department (e.g., physics at the University of Göttingen in the 1920s; the Cavendish Laboratory in Cambridge during much of the twentieth century).

Since the mid-eighteenth century, the most highly creative systems of science have been embedded only in those societies that were hegemons (from the ancient Greek word hegemon, meaning "leader").

A hegemonic power is one that exercises political, economic, and military supremacy over all other powers during a particular historic period. It was a society's economic, political, and military hegemonic power that gave birth to the creative scientific hegemon. A scientific hegemon dominates multiple scientific fields and establishes the standards of excellence in most scientific fields. Its language is the major one used in scientific communication, and its scientific elite is the one most prominent in the world of science. It attracts more foreign young people for training than any other country. Its scientific culture tends to reflect society's culture. Scientific hegemons are embedded in societies that are economic, political, and military hegemons—but not all political, economic, and military hegemonic powers develop a hegemonic scientific system. However, modern hegemonic scientific systems exist only in societies that are political, economic, and military hegemons.

The process by which scientific hegemons emerged as well as declined varied from society to society—although the underlying explanation was the same in each society. When their systems began to decline, the elites in scientific hegemons often failed to understand this fact; indeed, they tended to believe that their systems were continuing to perform extraordinarily well. Only as a result of a retrospective analysis was a hegemonic system of science observed to have been in relative decline. Figure 1 is a representation of the rise and decline of four hegemonic systems of science since the middle of the eighteenth century: French, German, British, and American.

[Figure 1. The Rise and Decline of Hegemonic Systems of Science: overlapping rise-and-decline curves for France (from c.1735), Germany (from c.1830), Britain (from c.1870), and the United States (from c.1930), each traced to 2012.]

French Hegemony

From around 1735 until the mid-nineteenth century, France led the world in scientific creativity—particularly in the fields of mathematics, physics, physiology, clinical medicine, zoology, and paleontology. A few of the most prominent French scientists of this period are listed in Table 1.

Table 1. Distinguished French Scientists, 1770–1850

Claude Bernard
Claude Berthollet
Charles-Édouard Brown-Séquard
Jacques Charles
Joseph Fourier
Augustin Jean Fresnel
Joseph Louis Gay-Lussac
Antoine-Laurent de Jussieu
René-Théophile Laennec
Jean-Baptiste Lamarck
Pierre Laplace
Auguste Laurent
Antoine Lavoisier
François Magendie
Étienne Louis Malus
Charles Messier
L. B. Guyton de Moreau
Urbain Jean Joseph Le Verrier

As France was a great power during the latter part of the eighteenth century, it is not surprising that it became a scientific hegemon. The world's leading scientific journals were published in France, the major scientific language was French, many of the world's most accomplished scientists were French, and large numbers of young people from all over Europe went to France for training. However, the turmoil brought about by the French Revolution and the military adventures of Napoleon Bonaparte had long-term negative effects on France's military, economic, and scientific influence. Of course, the decline of France's distinction in science did not occur all at once. Indeed, throughout the nineteenth century and even through the early years of the twentieth century, many of the world's most eminent scientists were French.

France's role as a scientific hegemon did not decline simply because of its relative decline in military, economic, and political power. There were inherent contradictions in French society that had profound implications for its science. Part of the problem was the centralization of French government. Before and somewhat after the French Revolution, centralization was significant in accelerating the rapid growth of France's role in world affairs, including its system of science. But during the nineteenth and on into the twentieth century, centralization had an adverse effect on France's ability to adapt to many of the radical innovations occurring elsewhere in the world, especially in the rest of Europe.

This was, of course, not true of all aspects of French society. Indeed, in the first half of the nineteenth century, outstanding scientific research occurred at the Collège de France in Paris and in several of the grandes écoles. Moreover, even as basic science in France declined during the nineteenth century, the society excelled in the development of large-scale technological systems. Jean-Baptiste Colbert, late in the seventeenth century, had led France in becoming a world leader in this area, a tradition that continued into the twentieth century with the development of French trains and aircraft.

state developed an excellent system of schools to train technocrats: the École des Mines, the École des Ponts et Chaussées, the École de Génie Militaire, and the world-­renowned École Polytechnique. At the École Polytechnique the dominant epistemology emphasized deductive reasoning, complemented by rigorous mathematics. However, partly because of its heavy investment in technological training in the development of large-­scale projects, the French state underinvested in the training of young scientists. For the past several centuries, French society has long admired highly achieving individuals, but has been miserly in investing in the development of individual creativity. Throughout the nineteenth and twentieth centuries, the celebration of great scientists and other intellectuals was an important part of French culture. But among the four societies discussed in this essay, none was more parsimonious and lacking in foresight than France in providing individual scientists with the financial and organizational resources they needed for outstanding research.1 From the middle of the nineteenth century, while German universities were providing the finest equipment for laboratories, some of France’s greatest biomedical scientists—­François Magendie, Claude Bernard, Charles-­Édouard Brown-­Séquard, Louis Pasteur, as well as Pierre Curie and Marie Curie—­often had to work under abominable conditions. It is a tribute to the French system of education, with its emphasis on individual brilliance and creativity, that these scientists performed so well despite their inadequately developed and underfunded research organizations. Over the years, scientists in France, in comparison with those in Germany, Britain, or the United States, more often than not had to operate in crowded laboratories, rely on obsolete equipment, and endure periodically the deleterious effects of inflation. Even when the French government provided ample funding for laboratories, the method of governance was highly centralized. While there was some variation in the type of state-­run organizations dedicated to research—­the universities, the Collège de France, hospitals, and the Musée de l’Histoire Naturelle (not a museum but a training and 29

j. ro gers hollingsworth and david m. gear

research center)—­these different organizations enjoyed little autonomy or flexibility, which naturally hampered their capacity to make major discoveries. Numerous accounts have described how the French university system has long been embedded in a highly centralized ministry of education that determined salaries and promotions. Letters of evaluation were often written largely by friends and mentors. Historically, an enormous amount of favoritism and organizational nepotism had been present.2 Some of France’s most distinguished scientists expressed harsh criticism of the system: its lack of funds, the mediocrity of its science, its perpetuation of antiquated disciplines and its reluctance to develop new ones, and the incompetence of its administrative personnel. Pasteur, Bernard, and Adolphe Wurtz all wrote scathing reports on French science. According to Terry Shinn, the files of applications of young people wishing to be trained as scientists became voluminous as the French government demanded information about the applicants’ families. But the applications were then often filed away without any response to the applicant. In the meantime, buildings deteriorated: roofs leaked, floors flooded, and walls crumbled. There are many reports of insufficient light and lack of running water in laboratories; for lack of adequate storage facilities, equipment sometimes simply vanished. These conditions were obviously disincentives for young people thinking of becoming scientists, while many of those who had embarked on a career in science lost their ambition to conduct research.3 In areas of creative activity with few expectations of funding by the state, such as in the arts, France excelled. One has only to think of French literature, painting, and sculpture in the nineteenth century. But in science, after the first third of the nineteenth century, the centralized state stifled individual creativity, except in the service of large-­ scale collective projects. Coupled with the decline of French political and economic hegemony on the world stage, France’s capacity to remain a scientific hegemon was diminished.4 30


German Hegemony

From France the world's center of scientific creativity shifted to Germany, which became the world's scientific hegemon from about 1840 to the 1920s—a consequence of economic prosperity and a powerful political elite with a strong military organization. From the middle of the nineteenth century until the early twentieth century, twenty prominent German research universities emerged, and Germany had a far larger number of serious research universities than any other country. The new type of German university produced many of the world's most creative mathematicians, physicists, chemists, biochemists, and biologists. Germany had the world's best-equipped laboratories and scientific institutes—such as the Kaiser Wilhelm (later Max-Planck) Institutes—and growing science-based industries in pharmaceuticals, dyes, and vaccines. In the first eleven years of the Nobel Prizes, from 1901 onward, thirteen German scientists received awards in physics, chemistry, and physiology or medicine—many more than any other nationality.

From 1880 until 1920, German science dominated numerous fields and established new standards of excellence. The leading scientific journals of the day were based in Germany, making German the major language for scientific communication. Germany attracted more foreign young people to study in its universities than any other country. Tens of thousands of young Americans traveled to Germany in the late nineteenth and early twentieth centuries for advanced training—a factor that led to the transformation of research in the United States.

However, as in France, fundamental contradictions were built into both the culture of Germany and its science system, particularly its high level of authoritarianism—a factor that would later place constraints on the creativity of German science. Respect for authority in society facilitated the rapid emergence of German universities, but it would ultimately be a factor in their relative decline. Because most university departments had only one professor, senior professors tended to incur heavy responsibilities for teaching across all fields in their particular discipline, limiting their ability to specialize, and heavy administrative burdens, limiting their time for research. In due course, creative research in most scientific disciplines began to level off. The increasing inability of German universities to create new disciplines necessitated the creation of the Kaiser Wilhelm Institutes in 1911, resulting in a surge of creative research, at least for a while. The first institutes were in Dahlem, a suburb of Berlin. They were established in physics, in various fields of chemistry, and in the biological sciences—all concentrated within a few hundred meters of each other—which contributed to Dahlem becoming one of the most creative centers of science anywhere (see Table 2).

Table 2. Distinguished Scientists at the Kaiser Wilhelm Institutes in Dahlem, Germany
Albert Einstein (N)
James Franck (N)
Richard Goldschmidt
Fritz Haber (N)
Otto Hahn (N)
Hans Krebs (N)
Lise Meitner
Otto Meyerhof (N)
Carl Neuberg
Michael Polanyi
Axel Theorell (N)
Otto Warburg (N)
Richard Willstätter (N)
(N) = Nobel Prize winner

Among the scientists appointed to these institutes were Albert Einstein, Richard Goldschmidt, Fritz Haber, Otto Hahn, Lise Meitner, Otto Warburg, and others of great distinction. One of the most important attractions, and a facilitator of interaction among scientists at the Dahlem institutes, was the "Haber colloquium" held every Monday afternoon. Among those frequently in attendance were the scientists mentioned above; more occasional attendees included Niels Bohr, Peter Debye, Selig Hecht, Max von Laue, Max Planck, Walther Nernst, Erwin Schrödinger, and Arnold Sommerfeld. Soon there were Kaiser Wilhelm Institutes in various parts of Germany, though none as creative as those in Dahlem.

The institutes would not have been possible without the hegemonic power of the German empire. Then Germany's political elites and military powers overreached themselves, resulting in Germany's defeat in the First World War, the loss of considerable territory, and a disrupted economy. By the early 1920s, all of these factors, combined with poor economic policies, resulted in some of the most disastrous inflation ever experienced in a modern economy, leading to the relative decline of German scientific hegemony even before the Nazis came to power in 1933.5 Of course, Germany's loss of status as a major power contributed to the emergence of the Nazi Party in the 1920s.

Yet even in the midst of the decline of a national scientific hegemon, scientific creativity may still occur in particular centers, as was clearly the case at the University of Göttingen in the 1920s (see Table 3). In that decade Göttingen became one of the most creative universities in the natural sciences of the entire twentieth century, encouraged by a high degree of communication among excellent scientists in diverse fields. Working there at the time were the internationally distinguished mathematicians Richard Courant, Edmund Landau, and David Hilbert, as well as the chemists Walther Nernst, Adolf Windaus, and Richard Zsigmondy, all three of whom received Nobel Prizes for work done mostly at Göttingen. Others at Göttingen were considered to be among the world's most creative scientists in their fields during the 1920s: Gustav Tammann in physical chemistry, Hans Stille in geology, Otto Mügge and Victor Moritz Goldschmidt in mineralogy, Hans Kienle in astronomy and astrophysics, and Ludwig Prandtl, the father of modern aerodynamic theory.

But in physics Göttingen excelled the most. Two Göttingen professors of physics became Nobel laureates: James Franck in experimental physics and Max Born in theoretical physics. Among Born's doctoral students or assistants were Wolfgang Pauli, Werner Heisenberg, Maria Goeppert-Mayer, Enrico Fermi, Pascual Jordan, Friedrich Hund, Erich Hückel, Lothar Nordheim, Léon Rosenfeld, Vladimir Fock, and Egil Hylleraas. The first four later received Nobel Prizes in physics, and all the others became world leaders in theoretical physics. Max Delbrück received his doctorate under the direction of Born and would later also receive a Nobel Prize in physiology or medicine, for work in phage genetics. Robert Oppenheimer received his doctorate in physics at Göttingen, also under the direction of Born. Others who spent varying periods of time in Göttingen's physics institute during the 1920s were two more future Nobel laureates, Paul Dirac and Eugene Wigner, and John von Neumann and Edward Teller.6

Table 3. Distinguished Scientists at Göttingen University, Germany, in the 1920s
Physicists: Max Born (N), James Franck (N), Robert Pohl
Doctoral Students or Assistants of Max Born: Max Delbrück (N), Enrico Fermi (N), Vladimir Fock, Maria Goeppert-Mayer (N), Werner Heisenberg (N), Erich Hückel, Friedrich Hund, Egil Hylleraas, Pascual Jordan, Lothar Nordheim, Robert Oppenheimer, Wolfgang Pauli (N), Léon Rosenfeld
Others Spending Time in Göttingen's Physics Institute: Niels Bohr (N), Paul Dirac (N), John von Neumann, Edward Teller, Eugene Wigner (N)
Mathematicians: Richard Courant, David Hilbert, Edmund Landau
Chemists: Walther Nernst (N), Gustav Tammann, Adolf Windaus (N), Richard Zsigmondy (N)
Geologist: Hans Stille
Mineralogists: Victor Moritz Goldschmidt, Otto Mügge
Astronomer and Astrophysicist: Hans Kienle
Aerodynamicist: Ludwig Prandtl
(N) = Nobel Prize winner

For a short period during the mid-1920s, Göttingen was clearly the world's most creative center in quantum theory, but shortly thereafter multiple creative centers emerged in the same field: Copenhagen (under Niels Bohr), Paris (under Louis de Broglie and Paul Langevin), Munich (under Arnold Sommerfeld), Zurich (under Erwin Schrödinger), and Cambridge (under Paul Dirac). Göttingen's star went into total eclipse following Adolf Hitler's ascent to power in 1933. Nazi Germany and later the Soviet Union represent cases of societies that became political and military hegemons without becoming scientific hegemons, suggesting that a scientific hegemon is unlikely to emerge in a society under totalitarianism.

British Hegemony

By the early twentieth century, the world hub of scientific creativity was beginning to shift from Germany to Britain. The United Kingdom had long been an economic, political, and military hegemon—with its colonial power extending across the world, and the world's most powerful navy. English now slowly replaced German as the leading scientific language. From the beginning of the twentieth century until the Second World War, funding for British science came from both government and industry, and the university system became increasingly creative, especially at Cambridge. The United Kingdom soon boasted a remarkable number of Nobelists, recognized for their creativity, a large majority of whom did their scientific work at Cambridge. Thirty-seven Nobel Prizes were awarded to scientists for work done in Britain before 1950—far more than were won in any other country during the same half-century (see Table 4).


Table 4. Thirty-Seven Scientists Awarded a Nobel Prize for Work Done in Britain Prior to 1950
Physicists: Lord Rayleigh (1904), J. J. Thomson (1906), Ernest Rutherford (1908), William Bragg (1915), Lawrence Bragg (1915), Charles Barkla (1917), C. T. R. Wilson (1927), Owen Richardson (1928), Paul Dirac (1933), James Chadwick (1935), George Thomson (1937), Edward V. Appleton (1947), Patrick M. S. Blackett (1948), Cecil Frank Powell (1950), John Cockcroft (1951), Ernest Walton (1951)
Chemists: William Ramsay (1904), Frederick Soddy (1921), Francis Aston (1922), Arthur Harden (1929), W. Norman Haworth (1937), Robert Robinson (1947), A. J. P. Martin (1952), R. L. M. Synge (1952), Cyril Hinshelwood (1956), Alexander Todd (1957), Frederick Sanger (1958)
Biological Scientists: Ronald Ross (1902), A. V. Hill (1922), Frederick Hopkins (1929), Charles Sherrington (1932), Edgar Adrian (1932), Henry Dale (1936), Ernst Chain (1945), Alexander Fleming (1945), Howard Florey (1945), Hans Krebs (1953)


Understanding how Britain emerged as a scientific hegemon requires a focus on Cambridge, where the university produced more major scientific discoveries in this period than any university anywhere.7 Of course, Cambridge's strength in science extended over several hundred years, with former students including Francis Bacon, Isaac Newton, and Charles Darwin. However, the catalyst for Cambridge's modern scientific prominence was the British realization in the latter part of the nineteenth century that Germany was rapidly becoming a great power, and that among the most important factors contributing to this status was the German education system, particularly its research universities. In response, the U.K. government began to spur the universities at Oxford and Cambridge to place greater emphasis on scientific research. At Cambridge the physics of James Clerk Maxwell and the founding of the Cavendish Laboratory in 1871, with Maxwell as its first director, expressed this new emphasis.

The Cavendish Lab was the nucleus of Cambridge physics in the late nineteenth and early twentieth centuries. During his life, Maxwell was recognized as one of the world's leading scientists, and today many would list him among the fifty most important scientists of all time.8 Maxwell's successor was Lord Rayleigh (John William Strutt), who became one of the most renowned physicists of the late nineteenth century and was the fourth recipient of the Nobel Prize in physics. Rayleigh was followed as director of the Cavendish by several other outstanding physicists, each of them in due course a Nobel laureate: J. J. Thomson, Ernest Rutherford, Lawrence Bragg, and Nevill Mott—a world record for a single laboratory.9

Yet the Cavendish directors' Nobel Prizes represent only a small part of the extraordinary achievements of scientists working at the laboratory. From the turn of the century to 1937 (the year of Rutherford's death), the following individuals did all or part of the work for which they received a Nobel Prize while working at the Cavendish: Lawrence Bragg, Francis Aston, C. T. R. Wilson, Owen Richardson, James Chadwick, George Thomson, Patrick Blackett, John Cockcroft, Ernest Walton, and Pyotr Kapitza. Between the beginning of Bragg's tenure as director in 1938 and his departure from Cambridge in 1953, the list of Cavendish-based Nobel Prize winners continues: Francis Crick, James D. Watson, Max Perutz, John Kendrew, and Martin Ryle. In addition, Paul Dirac was awarded a Nobel Prize for work in physics he conducted at Cambridge. Table 5 lists the scientists who did some or all of their Nobel Prize–winning work at the Cavendish Laboratory.

Table 5. Nobel Prize Winners Who Did Some or All of Their Work at the Cavendish Laboratory, Cambridge, Prior to 1960
Lord Rayleigh (1904)
J. J. Thomson (1906)
Ernest Rutherford (1908)
Lawrence Bragg (1915)
Francis Aston (1922)
C. T. R. Wilson (1927)
Owen Richardson (1928)
James Chadwick (1935)
George Thomson (1937)
Patrick M. S. Blackett (1948)
John Cockcroft (1951)
Ernest Walton (1951)
Francis Crick (1962)
James D. Watson (1962)
Max Perutz (1962)
John Kendrew (1962)
Martin Ryle (1974)
Pyotr Kapitza (1978)

Numerous other Cambridge-based scientists of considerable distinction were part of the greater Cavendish physics community. These included Rutherford's son-in-law, Ralph Fowler, whose research expertise spanned pure mathematics, statistical mechanics, astrophysics, quantum theory, thermodynamics, and fundamental theories of semiconductors. There was also Arthur Eddington, one of the major astrophysicists of the twentieth century, and J. D. Bernal, one of the most influential crystallographers of the first half of the twentieth century. Linus Pauling referred to Bernal as more creative than "any other living man," and "one of the greatest intellectuals of the twentieth century."10


No department has ever had so many distinguished scientists as the Cavendish Laboratory. Indeed, this single department has received more Nobel Prizes for work actually done at the Cavendish than all of France's and Italy's science Nobel laureates combined. Yet the distinction of the Cavendish was only the tip of the iceberg: British science as a whole was clearly the global scientific hegemon during the first half of the twentieth century. With the decline of the British Empire during and after the Second World War, Britain's power as a scientific hegemon also diminished. However, the British case is very different from that of France after the decline of French political and military hegemony in the nineteenth century. In the United Kingdom, science continued to be quite strong, probably because the country's political system remained relatively democratic and not highly centralized. British science continued to be highly creative for the rest of the twentieth century. More than two dozen Nobel Prizes were awarded to British scientists up to 2000 for work begun after 1950—more than in any other country after the United States. While the Nobel Prize is only one indicator of scientific creativity, this U.K. record is undoubtedly significant.

American Hegemony and Scenarios for the Future

By the end of the Second World War, the United States had picked up the baton in science, and it still holds it. The United States emerged from the war as the world's dominant economic, political, and military power, which facilitated its dominance in scientific creativity. Since then, American scientists have received more than half of the most prestigious awards in science, such as the Nobel, Lasker, Horwitz, and Crafoord prizes—major indicators of high levels of scientific creativity. For many years U.S. researchers have dominated scientific journals, accounting for approximately 30 percent of all published papers and more than 50 percent of the top 1 percent of the most-cited papers. The United States has also attracted large numbers of young scientists for advanced training, recalling the migration of thousands of Americans to German universities and the flow to Britain of scientists from the British Empire in the later nineteenth century and after; and English has remained the world's dominant scientific language.

But history suggests that the United States has no cause for complacency about its future levels of scientific creativity. Patterns in the rise and decline of scientific hegemons suggest that the United States could eventually look back on the early twenty-first century as the peak of its scientific dominance. Each former giant of scientific creativity emerged when the society's economy became extraordinarily robust by world standards. As the French, German, and British economies declined, so did their science systems. Each former scientific power, especially during the initial stages of decline, had the illusion that its system was performing better than it actually was, overestimating its strength and underestimating innovation elsewhere. The elite could not imagine that the center would shift.

What Is the State of Scientific Creativity in the United States?

Since 1945, the number of scientific papers and journals in highly industrialized societies—particularly the United States—has risen almost exponentially, while the proportion of the workforce in research and development and the percentage of gross national product devoted to it have grown more modestly. Yet the rate at which truly creative work emerges has remained relatively constant. In terms of the scale of research efforts required to make major scientific breakthroughs, there are diminishing returns.

Meantime, the scale of science has changed in many areas, raising interesting questions about creativity at the individual level. The United States has led the way in the emergence of "big science" (e.g., the Manhattan Project, the Jet Propulsion Lab, and the Lawrence Livermore, Argonne, and Brookhaven National Laboratories). Indeed, in many fields there has been a major shift from individual to team research. One of the virtues of large-scale science is the ability to organize sizeable groups with different skills, ideas, and resources. Teams produce many more scientific papers than individuals, leading to a boom in scientific publishing. In recent decades, the average number of authors per paper has more than doubled. Moreover, team-authored papers are 6.3 times more likely than solo-authored papers to receive at least 1,000 citations. However, as scientific creativity is achieved primarily by individuals, measures of performance at the collective level pose difficult problems for assessing levels of creativity. This leads us back to the first paragraph of this essay: What is the right level of analysis for measuring creativity?

In some fields, the transformation toward big science has built in irreversible constraints on the organization of scientific research. During the past half-century, research universities, research institutes, and research-oriented pharmaceutical companies have dramatically increased in number. Many universities, especially in the United States, have become increasingly bureaucratic and fragmented, with numerous huge departments, organized like silos, impeding communication across fields. The number of postdocs, research assistants, and technicians has mushroomed. To manage large scientific organizations, multiple levels of management have developed, with leaders of subgroups, chairs of departments, associate deans, deans of colleges, provosts for academic affairs, chancellors, and vice presidents for research, business affairs, and legal affairs. In some respects, the research segments of many U.S. universities have become like holding companies, with universities glad to have the staff as long as they can bring in large research grants and pay substantial institutional overhead costs. Granting agencies and universities, realizing that this kind of structure has become dysfunctional, have made efforts to reduce the number of managerial levels and to develop matrix-type teams to minimize organizational rigidities, but organizational inertia hampers these reform efforts.

The ballooning of publications has meant that universities, funding agencies, and reviewers have less time to evaluate scientific output—that is, publications—carefully, and have come to rely increasingly on quantitative measures based on citation statistics to assess creativity. The creativity of individual scientists is measured more and more by the number of papers on which the scientist is listed as a participant in the research and by how much research funding the scientist has generated. At the same time, the increasing commercialization of science has tended to emphasize short-term scientific horizons. All these trends pose serious problems for the future creativity of U.S. science. As funding agencies and leaders in the scientific community increase the incentives to commercialize science, the system risks losing its flexibility and diminishing its capacity to make major, fundamental discoveries that may become the basis for new technologies decades after the discoveries. As is well known, new knowledge (e.g., major discoveries) often appears in an unanticipated and unplanned process with unpredictable consequences.

Is It Possible to Alter the Dynamics?

Our research on more than three hundred major discoveries in basic biomedical science in Britain, France, Germany, and the United States since 1900 demonstrates that a large percentage of the highest scientific creativity occurred in organizational contexts having the characteristics described in Table 6.11 The few organizations where major breakthroughs occurred again and again were relatively small; they had high autonomy, flexibility, and the capacity to adapt rapidly to the fast pace of change in the global environment of science. Such organizations tended to have moderately high levels of scientific diversity and internal structures that facilitated the communication and integration of ideas across diverse scientific fields. Most of these organizations had scientific leaders with a keen vision of the direction in which new fields in science were heading, a strategy for recruiting scientists capable of moving a research agenda in that direction, and the ability to nurture young scientists while socializing them to accept the highest standards of scientific excellence.

Table 6. Characteristics of Organizational Contexts That Facilitate Major Discoveries*
1. Organizational leadership with
a. The capacity to understand the direction in which scientific research is moving.
b. The strategic vision to integrate diverse areas and provide focused research.
c. The ability to secure funding for these activities.
d. The capacity to recruit individuals who can confront important scientific problems that can be solved.
e. The capacity to provide rigorous criticism in a nurturing environment.
2. Moderately high scientific diversity—for example, organizational contexts (entire organizations, departments) with a variety of biological disciplines, medical specialties, and subspecialties, and numerous people in the biological sciences who have research experience in different disciplines or paradigms.
3. Strong communication and social integration in the organization, so that scientists from different scientific fields come together with frequent and intense interaction in collective activities such as joint publications, journal clubs and seminars, team teaching, meals, and other informal activities.
4. The capacity to recruit individual scientists who internalize a moderately high degree of scientific diversity.
5. Organizational autonomy and organizational flexibility. The organizational context of research is relatively independent of its institutional environment, and the organizational context is flexible enough to shift rapidly from one area of science to another. If the organizational context is part of a larger organization, it can enjoy autonomy and flexibility only if it is loosely coupled both to the larger organization and to the institutional environment.
* These characteristics are derived from an in-depth analysis of the organizational contexts in which major discoveries occurred, or did not occur, in the twentieth century.

Dozens of scientists who made significant advances did so in an organizational context with fewer than fifty full-time researchers. In the recent past, some of the most creative work in basic biomedical science occurred in relatively small centers such as the Rockefeller University in New York; the Salk Institute in San Diego, California; the Basel Institute for Immunology in Switzerland; the Laboratory of Molecular Biology in Cambridge, U.K.; and various Max-Planck Institutes in Germany.12 Since 1998 several Nobel Prizes have been awarded to scientists for work done in relatively small institutions: Günter Blobel (physiology or medicine), Ahmed Zewail (chemistry), Paul Greengard (physiology or medicine), Andrew Fire (physiology or medicine), Roderick MacKinnon (chemistry), and Gerhard Ertl (chemistry).

Table 7 presents the characteristics that place constraints on the ability of organizations to make major discoveries. Most large universities in the United States, France, Germany, and the United Kingdom have tended to show these characteristics: differentiation into large numbers of scientific disciplines, less communication across scientific disciplines than in small organizations, and less organizational autonomy and flexibility to adapt to the fast pace of scientific change.

Table 7. Characteristics of Organizational Contexts That Constrain Discoveries*
1. High differentiation means that sharp boundaries exist between subunits such as basic biomedical departments and other subunits such as departments, divisions, or colleges, which control recruitment and have responsibility for extramural funding.
2. Hierarchical authority requires centralized decision making about research programs, numbers of personnel, work conditions, or budgetary matters.
3. Bureaucratic coordination imposes a high degree of standardization of rules and procedures.
4. Hyperdiversity prevents effective communication among researchers in different fields or even in similar fields.
* These characteristics are derived from an in-depth analysis of the organizational contexts in which major discoveries occurred, or did not occur, in the twentieth century.

Why would organizations able to facilitate communication across diverse fields, and thus to integrate scientific diversity, have an advantage in making major discoveries over those with a lower capacity for such communication and integration? In our study of major discoveries, every single one reflected a great deal of scientific diversity. Of course, very good science occurs in organizational environments highly specialized within a very narrow field, where there is little connection across disciplines and subspecialties. But the science produced in such narrow and specialized environments tends to reflect insufficient diversity to be recognized as a major discovery by the scientific community, with its vast array of different disciplines. Nonetheless, major breakthroughs did occur in the type of organizational context described in Table 7—but only when the individual laboratory making the discovery was structured quite differently from most laboratories in that type of organizational context. In other words, the lab was headed by a scientist operating in an organizational environment that generally would not be expected to produce a major discovery.13

If the past is a guide for the future, America's science system could enhance its performance—particularly in basic biomedical science, but in other fields as well—by creating several dozen small research organizations in interdisciplinary domains or in emerging fields, modeled along the lines of the organizations mentioned above. In recent years, there have been several such efforts, for instance, the Howard Hughes Medical Institute's Janelia Farm in Virginia; the Santa Fe Institute in New Mexico; the Institute Para Limes in Warnsveld, the Netherlands; and the new Institute for Quantum Optics and Quantum Information in Vienna. Each of these small institutes has strong links with other organizations and a continuing group of visiting scientists. What is envisioned here are small institutes, each the hub of a network: a variation on the practice, a few decades ago, of the Woods Hole Oceanographic Institution, Cold Spring Harbor Laboratory, or the Salk Institute with its nonresident fellows consisting of a stable of future Nobel laureates (e.g., Francis Crick, Torsten Wiesel, David Hubel, Jacques Monod, and Gerald Edelman).

Perspectives on the Future

The decline of the U.S. economy relative to the rest of the world is facilitating the strengthening of science elsewhere. An evolving multipolar world economy is leading to multiple centers of science—the United States, the European Union, Japan, China, Russia, and possibly India. The increasing wealth of several of these societies is enabling them to lure back many younger scientists trained abroad in the world's leading institutions.14

A remarkable change has been the emergence of China as an important power in science. For example, China was fourteenth in the world in production of science and engineering papers in 1995; by 2005, as the Chinese economy boomed, it was fifth, according to Thomson Reuters ISI; and by 2007 it was second. Between 1994 and 2008, the number of natural sciences and engineering doctoral degrees awarded in China increased tenfold, so that by 2007 China had surpassed the United States as the country awarding the largest number in the world. Moreover, in recent years more and more senior expatriate scientists have been returning to China. Of course, such indicators tell us little about highly creative achievement at the level of the individual—but they do have implications for future trends in creativity in China.

As we reflect on the future of U.S. scientific creativity, there are several possible scenarios to consider. One possibility is that the American system will continue to perform extraordinarily well, with continued exponential growth in the number of research articles and journals. A second scenario is similar to what is occurring in the world of business. Just as firms are becoming increasingly globalized, the organization of science will also become more global, with scientists having greater mobility, moving back and forth among labs in the United States, the United Kingdom, Germany, Singapore, China, Australia, India, Scandinavia, and so forth. While certain geographical locations will remain stronger than others, the structure of research organizations and laboratories, and their performance across major regions of the world, will increasingly converge.

A third scenario is that the commercialization of science will increase, to the long-term detriment of fundamental major discoveries, as research organizations become excessively concerned with pecuniary gain and with short-term scientific horizons. But the successful functioning of an advanced industrial society depends on an abundant flow of fundamental basic knowledge. Such fundamental knowledge has unintended consequences. As suggested above, it often takes three or four decades, or even longer, before a fundamental discovery has an economic payoff. The X-ray crystallographic work of William and Lawrence Bragg, for which they were awarded a Nobel Prize in 1915 and which was followed by much more work in crystallography by others, is only now being used for advances in drug discovery. Similarly, the pathbreaking work of Oswald Avery in 1944 on genes and DNA and the discovery of the structure of DNA by Crick and Watson in the early 1950s are now—a half-century later—having significant consequences for the biotech industry. Indeed, the work of the three Nobel Prize winners in physiology or medicine announced in 2007 was very dependent on the work of Avery, Crick, and Watson more than a half century earlier.

The fourth scenario is the one suggested above, in which scientists make major discoveries in relatively small organizational settings. This is not to suggest that only one type of organization is suitable for fundamental discoveries. We do need to be mindful, however, that excellence in science can still occur on a very small scale. This is true not only in the basic biomedical sciences but also in the world of physics, which some tend to think can flourish only as "big science." Even in physics, excellent science is still occurring within small groups, often consisting of only one or two senior investigators plus two or three young assistants. Speaking of his own field, the physicist Per Bak argued some years ago that the dominance of large-scale physics projects has ended.15 Consider some Nobel Prize winners in physics whose work was done in relatively small settings: Klaus von Klitzing for his work on the quantum Hall effect in semiconductors (1985); Gerd Binnig and Heinrich Rohrer of the IBM labs in Zurich for their design of the scanning tunneling microscope (1986); Alex Müller and Georg Bednorz, also in Zurich, for their work on superconductivity in ceramic materials (1987); and Pierre-Gilles de Gennes (1991) of the Collège de France for his discoveries in liquid crystals and polymers—followed by a number of other Nobel Prizes toward the end of the twentieth century also involving small-scale science. But it is not only Nobel laureates in physics who have been able to do excellent work on a low budget. For example, at the relatively small Rockefeller University, two very creative physicists, Mitchell Feigenbaum and Albert Libchaber, have done much of their most creative work alone in fields related to chaos theory and small-scale fluid experiments.

There is no certainty that the U.S. system of science is in decline. But in our judgment, for the system to continue to make fundamental discoveries and to flourish relative to other major centers of science, it must have organizations with a high degree of flexibility and autonomy, in which scientists can have intense interactions with one another across diverse fields. American society has the potential to develop and maintain such organizational settings. If funding agencies and leaders in the scientific community do not recognize the necessity of placing limits on the commercialization of science, with its associated large-scale research environments, the system risks losing its flexibility and its capacity to make fundamental new discoveries without regard to their immediate applicability. A number of studies sponsored by the National Science Foundation and other funding agencies have demonstrated that more than half of the major technological innovations of the twentieth century resulted from fundamental science, that is, science conducted without regard for its usefulness. While no one knows what the proper balance should be between fundamental research and applied research, we should reflect on the possibility that without a strong commitment to fundamental research, American society may have a dearth of fundamental new knowledge to draw upon for new applications some forty or fifty years down the road.

Acknowledgments

The authors wish to acknowledge the contributions and assistance of Ellen Jane Hollingsworth and Karl H. Müller, and information obtained from interviews with the following scientists: Baruch S. Blumberg, Günter Blobel, Francis Crick, Gerald Edelman, Andrew Huxley, Aaron Klug, Roderick MacKinnon, Paul Nurse, and Fred Sanger.

Notes
1. Henry E. Guerlac, "Science and French National Strength," in Modern France: Problems of the Third and Fourth Republics, ed. Edward Mead Earle (New York: Russell and Russell, 1964), 81–105; and Christine Sinding, "Claude Bernard and Louis Pasteur: Contrasting Images through Public Commemorations," Osiris 14 (1999): 61–85.
2. Spencer R. Weart, Scientists in Power (Cambridge, MA: Harvard University Press, 1979), 26.
3. Terry Shinn, "The French Science Faculty System, 1808–1914," Historical Studies in the Physical Sciences 10 (1979): 271–332.
4. Joseph Ben-David, "Scientific Productivity and Academic Organization in Nineteenth-Century Medicine," American Sociological Review 25 (1960): 828–43; M. J. Nye, "Recent Sources and Problems in the History of French Science," Historical Studies in the Physical Sciences 13 (1983): 401–15; and M. J. Nye, "Scientific Decline: Is Quantitative Evaluation Enough?" Isis 75 (1984): 697–708.
5. Alan Beyerchen, Scientists under Hitler: Politics and the Physics Community in the Third Reich (New Haven, CT: Yale University Press, 1977).
6. Max Born, My Life: Recollections of a Nobel Laureate (London: Taylor & Francis, 1978); and David Nachmansohn, German-Jewish Pioneers in Science, 1900–1933: Highlights in Atomic Physics, Chemistry and Biochemistry (Berlin: Springer, 1979).
7. J. Rogers Hollingsworth and Ellen Jane Hollingsworth, Fostering Scientific Excellence: Organizations, Institutions, and Major Discoveries in Biomedical Science (Cambridge: Cambridge University Press, forthcoming).
8. Melanie Mitchell, Complexity: A Guided Tour (Oxford: Oxford University Press, 2009), 43.
9. J. G. Crowther, The Cavendish Laboratory, 1874–1974 (New York: Science History Publications, 1974); and Peter Harman and Simon Mitton, eds., Cambridge Scientific Minds (Cambridge: Cambridge University Press, 2002).
10. Andrew Brown, Bernal: The Sage of Science (Oxford: Oxford University Press, 2005), 473, 485.
11. J. Rogers Hollingsworth and Ellen Jane Hollingsworth, Major Discoveries, Creativity, and the Dynamics of Science (Vienna: Edition echoraum, 2011).
12. J. Rogers Hollingsworth, "Institutionalizing Excellence in Biomedical Research: The Case of Rockefeller University," in Creating a Tradition of Biomedical Research: The Rockefeller University Centennial History Conference, ed. Darwin H. Stapleton (New York: Rockefeller University Press, 2004), 17–63.
13. J. Rogers Hollingsworth, "A Path Dependent Perspective on Institutional and Organizational Factors Shaping Major Scientific Discoveries," in Innovation, Science, and Institutional Change: A Research Handbook, ed. Jerald Hage and Marius Meeus (New York: Oxford University Press, 2006), 423–42; and J. Rogers Hollingsworth, "High Cognitive Complexity and the Making of Major Scientific Discoveries," in Knowledge, Communication and Creativity, ed. Arnaud Sales and Marcel Fournier (London: Sage, 2007), 129–55.
14. J. Rogers Hollingsworth, Karl H. Müller, and Ellen Jane Hollingsworth, "The End of the Science Superpowers," Nature 454 (2008): 412–13.
15. Per Bak, How Nature Works (New York: Copernicus, 1996).

Bibliography
Bak, Per. How Nature Works. New York: Copernicus, 1996.
Ben-David, Joseph. Centers of Learning: Britain, France, Germany, United States. New York: McGraw-Hill, 1977.
———. "Scientific Productivity and Academic Organization in Nineteenth-Century Medicine." American Sociological Review 25 (1960): 828–43.
Beyerchen, Alan. Scientists under Hitler: Politics and the Physics Community in the Third Reich. New Haven, CT: Yale University Press, 1977.
Born, Max. My Life: Recollections of a Nobel Laureate. London: Taylor & Francis, 1978.
Brown, Andrew. Bernal: The Sage of Science. Oxford: Oxford University Press, 2005.
Crowther, J. G. The Cavendish Laboratory, 1874–1974. New York: Science History Publications, 1974.
Guerlac, Henry E. "Science and French National Strength." In Modern France: Problems of the Third and Fourth Republics, edited by Edward Mead Earle, 81–105. New York: Russell and Russell, 1964.
Harman, Peter, and Simon Mitton, eds. Cambridge Scientific Minds. Cambridge: Cambridge University Press, 2002.
Hollingsworth, J. Rogers. "Factors Associated with Scientific Creativity." Euresis Journal 2 (2012): 77–112.
———. "High Cognitive Complexity and the Making of Major Scientific Discoveries." In Knowledge, Communication and Creativity, edited by Arnaud Sales and Marcel Fournier, 129–55. London: Sage, 2007.
———. "Institutionalizing Excellence in Biomedical Research: The Case of Rockefeller University." In Creating a Tradition of Biomedical Research: The Rockefeller University Centennial History Conference, edited by Darwin H. Stapleton, 17–63. New York: Rockefeller University Press, 2004.
———. "A Path Dependent Perspective on Institutional and Organizational Factors Shaping Major Scientific Discoveries." In Innovation, Science, and Institutional Change: A Research Handbook, edited by Jerald Hage and Marius Meeus, 423–42. New York: Oxford University Press, 2006.
———, and Ellen Jane Hollingsworth. Fostering Scientific Excellence: Organizations, Institutions, and Major Discoveries in Biomedical Science. Cambridge: Cambridge University Press, forthcoming.
———, with the assistance of David M. Gear. Major Discoveries, Creativity, and the Dynamics of Science. Vienna: Edition echoraum, 2011.
Hollingsworth, J. Rogers, Karl H. Müller, and Ellen Jane Hollingsworth. "The End of the Science Superpowers." Nature 454 (2008): 412–13.
Mitchell, Melanie. Complexity: A Guided Tour. Oxford: Oxford University Press, 2009.
Nachmansohn, David. German-Jewish Pioneers in Science, 1900–1933: Highlights in Atomic Physics, Chemistry and Biochemistry. Berlin: Springer, 1979.
Nye, M. J. "Recent Sources and Problems in the History of French Science." Historical Studies in the Physical Sciences 13 (1983): 401–15.
———. "Scientific Decline: Is Quantitative Evaluation Enough?" Isis 75 (1984): 697–708.
Shinn, Terry. "The French Science Faculty System, 1808–1914." Historical Studies in the Physical Sciences 10 (1979): 271–332.
Sinding, Christine. "Claude Bernard and Louis Pasteur: Contrasting Images through Public Commemorations." Osiris 14 (1999): 61–85.
Weart, Spencer R. Scientists in Power. Cambridge, MA: Harvard University Press, 1979.


Chapter Two

Exceptional Creativity in Physics
Two Case Studies—Niels Bohr's Copenhagen Institute and Enrico Fermi's Rome Institute
Gino Segrè

Creativity in science and technology is both common and necessary, although it often takes place at times and in ways that are not readily recognized. Exceptional creativity is of course more unusual and, when it occurs, is sometimes only identified with hindsight. However, I try here to identify circumstances that foster such activity. I then go on to discuss how these circumstances are displayed in physics research, concentrating for illustrative purposes on two particular historical examples: the workings of the Copenhagen Institute for Theoretical Physics (now renamed the Niels Bohr Institute) during the 1920s, and those of Enrico Fermi's group at the University of Rome during the 1930s.

Although generalizing is dangerous, I believe one can characterize exceptional creativity by looking for responses to the following five sets of questions:

1. The Times. What area has evolved to the point where exceptional intellectual challenges or their applications can significantly alter the way people think, live, and work? What is the opportune time for such areas to be recognized, and how is it done?

2. The Places. In what kind of institutions can one address these challenges? How are they organized, and how is creativity nurtured in them? What can be done to further their forward momentum?

3. The Environment. What is the role of the surrounding milieu in creating an atmosphere in which exceptional endeavors emerge? What sort of encouragement is beneficial, and how important is it to have a variety of backgrounds and training? How important is freedom of inquiry in promoting development internally and externally?

4. Mentoring. What role do mentors play in helping young talent develop, and at what stages is it most important? In what ways do mentors need to be involved in the research in question? What role should mentors play in administration?

5. The Individual. Who are the exceptional individuals who can respond to these challenges, and how are they identified? How can such individuals be encouraged and supported?

Finally, it is worth exploring how an institution that has been at the center of exceptional creativity can transform itself as it adapts to changing times and interests.

Niels Bohr and the Copenhagen Institute for Theoretical Physics

The Times

In 1913 Niels Bohr joined together two important threads of physics research, one based on theory and the other on experimental observations. The first, quantum theory, followed from Max Planck's revolutionary 1900 conjecture that energy is emitted and absorbed in discrete quanta and from Albert Einstein's brilliant 1905 insight that light, or more generally electromagnetic radiation, has a dual nature manifested as both waves and particles. The second was Ernest Rutherford's experimental discovery in 1909–1911 that the atom was to be viewed as essentially a great void with a tiny massive nucleus at its center. A "fly in the cathedral" or a "gnat in Albert Hall" are phrases sometimes used to convey to the general public the relative sizes of the nucleus and the atom.

Bohr's picture of the atom departed radically from classical physics by having electrons circle the nucleus, emitting or absorbing energy only when they jump from one orbit to another; according to the rules of classical physics, by contrast, circling electrons radiate energy continuously, quickly spiraling down to the core. Another major departure from classical physics was the notion that only certain orbits were allowed for electron motion, their size determined by quantum rules. The model Bohr proposed explained many puzzling features of atomic behavior, with a particularly dramatic fit to experimental information obtained from the study of atoms with only a single electron: hydrogen and ionized helium. These sudden and unexpected successes made it clear that certain of the theory's elements would need to be incorporated into any future theory. On the other hand, its failures made it equally clear that much more was needed before there could be a satisfactory quantum theory of matter.

After the appearance of the Bohr model, understanding the atom became increasingly possible. Researchers now thought that the time was ripe for attacking this problem and believed that working on it was necessary for understanding the structure of matter. The subject of atomic physics became a central research area of physics, with Bohr as a guiding figure in its development. A physics revolution had begun. The First World War interrupted its progress, but afterward advances came rapidly.

The Place

In 1916 Bohr was awarded a theoretical physics professorship, Denmark's first appointment in the field. Unfortunately, very few resources were attached to the post: Bohr initially shared a tiny office with a secretary and Hendrik Kramers, his first assistant and coworker. But almost immediately upon taking up his professorship, Bohr began planning an institute that would welcome young physicists from around the world. His idea was to gather in Copenhagen a community of physicists who would live and work there for periods ranging from days to years. In early 1917 he petitioned the university for funds to build his dream institute, outlining in his proposal both the scope of such an undertaking and its desirability. Bohr also underlined that the state of affairs in atomic physics was such that it had become necessary for theorists to provide guidance to experimenters in their work. In other words, the hitherto-accepted mode of experiment leading to theory might need to be reversed in this case. Recognizing, however, the importance of theorists and experimentalists working together, Bohr also began to contemplate having an experimental physics component in the new institute.

Raising the necessary funds was no easy task, particularly in light of the postwar financial depression gripping Europe, but Bohr was tireless. Much of the financing came from private sources, inspired to contribute by Bohr's zeal and earnestness. Land was bought, construction began, and in March 1921 the Universitets Institut for Teoretisk Fysik was inaugurated. Throughout the process, Bohr was involved in all aspects of the planning, building, and fund-raising.

Before the Bohr Institute, a true theoretical physics community had not existed; the institute's continued influence on the field was profound. The young might begin their studies in Munich, Göttingen, Leipzig, Zurich, or elsewhere, but their residence afterward in Copenhagen would prove formative for their careers, and the associations they made there were important for the rest of their professional lives.

The Environment

From the very start, Bohr set a tone of informality at his institute. Living as he did at first directly above and later adjacent to where visitors worked, Bohr followed their progress and met often with them, sometimes even accompanying them to the movies or playing sports together. No distinction of age or rank was present. Visitors were astonished to see the twenty-one-year-old Lev Landau insisting that the forty-five-year-old Bohr was wrong and telling him to be quiet and listen. This tone was totally different from the hierarchical structure then prevalent in universities. But Bohr thrived on such give-and-take and even enjoyed it. All that mattered was one's commitment to physics. This kind of atmosphere came to be known succinctly as the Copenhagen Spirit.

Anxious to renew their contacts after they had left, many young former Bohr Institute attendees returned, even if only for a brief visit. In order to institutionalize these gatherings in Copenhagen, a scheduled weeklong meeting with no agenda or proceedings was held every year after 1929. The idea was simply to have a chance to renew acquaintances and exchange ideas in an informal setting. Unlike any other physics meeting held at the time, it soon became another Bohr Institute tradition. Hierarchical relations between elder and younger scientists had been the norm; Bohr's Institute provided a different, egalitarian model.

Mentoring

Early in his career, Bohr had a short but very important mentoring experience that profoundly influenced the course of his research and probably shaped his desire to guide a multinational research institute. It took place during a four-month stay in 1912 at Ernest Rutherford's Manchester University laboratory, during the course of which Bohr observed how the older physicist interacted with his younger assistants and guided their research. The affection and respect that Bohr and Rutherford had for each other is manifest in their correspondence. This personal link was bolstered by the key role that Rutherford played as head of the Cavendish Laboratory in Cambridge. While Copenhagen was becoming a great center for theoretical physics, the Cavendish was the major world institution for experimental atomic and nuclear physics. The exchanges between these two great figures continued unabated until Rutherford's tragic, sudden death in 1937 after a hernia operation.

Rutherford was a powerful mentor, but so was Bohr: his influence on the likes of Werner Heisenberg and Wolfgang Pauli is frequently acknowledged. In his memoirs, Heisenberg remembered his first meeting with Bohr, a three-hour walk taken together in Göttingen after a lecture by Bohr, as follows: "That walk was to have profound repercussions on my scientific career, or perhaps it is more correct to say that my real scientific career only began that afternoon." Heisenberg was just twenty years old at the time. Part of the extraordinary praise for Bohr is due to the personal concern he had for all the young physicists with whom he came into contact. He worked tirelessly to advance their careers, to find support for them through fellowships and faculty positions, and to protect those who were persecuted during the Nazi era. Direct mentoring need not extend over a long time to have profound repercussions, but it seems most beneficial if contact can be maintained. Both Bohr and Rutherford adopted this model.

The Individual

Most of the theoretical physicists working on problems of quantum physics during the 1925–1930 period were so young that the Germans nicknamed the subject Knabenphysik (boys' physics). Wolfgang Pauli, Werner Heisenberg, Enrico Fermi, and Paul Dirac, all four born between 1900 and 1902, are the most famous, but a host of others emerged as well. Naming only Nobel Prize winners in physics, the list includes Hans Bethe, Felix Bloch, Louis de Broglie, Subrahmanyan Chandrasekhar, Lev Landau, Maria Goeppert-Mayer, Nevill Mott, John Van Vleck, and Eugene Wigner. Others of their contemporaries, such as Pascual Jordan, Walter Heitler, Fritz London, George Gamow, Ettore Majorana, J. Robert Oppenheimer, Rudolf Peierls, and Victor Weisskopf, were of similar caliber. All of the above were theoretical physicists and all, with the exception of de Broglie, were born between 1902 and 1910. Almost every one of the individuals listed above was a resident at the Copenhagen Institute for varying periods of time. They were influenced by the work done there, by the mentorship of Bohr, and by the stimulation of talking to each other. Although there is always a chicken-and-egg-type argument about whether the individuals create the special circumstances or the circumstances attract the individuals, in this case at least there was a combination of the two.

Most physicists agree that quantum mechanics, both for the profundity of the new ideas it introduced and for the wealth and importance of its applications, was the greatest physics revolution of the twentieth century. An extraordinary group of young theoretical physicists came to the Copenhagen Institute in the period between its founding and the Second World War, profoundly influencing the development of atomic, and later nuclear, physics.

Enrico Fermi and the Rome Institute

The Times

The major hurdles in understanding the behavior of atoms were overcome during the mid- to late 1920s by a series of very significant advances. These include Pauli's introduction of the Exclusion Principle in 1924, the development of matrix mechanics by Heisenberg and of wave mechanics by Erwin Schrödinger, the Copenhagen interpretation of quantum mechanics that included Heisenberg's uncertainty principle, and finally Dirac's joining of special relativity to quantum mechanics. By 1932, with a now-satisfactory theory of how the electrons surrounding the atomic nucleus behave, it was becoming increasingly clear that the nucleus's structure was going to be the next frontier in the effort to study matter at ever-smaller scales. Doing so would require solving a whole new class of problems. How was it possible that the nucleus, known to be composed of positively charged particles (protons), did not fly apart because of the repulsion of like charges? How did the nucleus manage to emit electrons in such a way that energy was apparently not conserved? How did those electrons come to be within the nucleus in the first place?

It was also clear that progress would require significant new experimental data. Whereas the advances of the 1920s in atomic physics had been achieved with relatively little new input from experiment, data were now needed to make progress. This led to a closer collaboration between theoretical and experimental physicists. The year 1932 was the turning point, the year in which dramatic new experiments changed the face of subatomic physics. They include John Cockcroft and Ernest Walton's achieving nuclear disintegration, Ernest O. Lawrence's construction of the first cyclotron, Harold Urey's detection of deuterium, and Carl D. Anderson's observation of antimatter. But the greatest immediate impact was due to James Chadwick's discovery of the neutron, the neutral counterpart to the proton. This point marked the true beginning of nuclear physics, leading soon thereafter to understanding both how the nucleus was held together and how it sometimes decayed. A new field of research had been initiated, and clearly many exciting new developments would follow.

With quantum mechanics providing an explanation for atomic structure, the time was finally ripe for examining the details of the nucleus's structure. Enrico Fermi, who had great skills both as a theorist and later as an experimentalist, would become a key figure in advancing the subject, combining as he did in a single individual many of Bohr's and Rutherford's abilities.

The Place

As remarked earlier, Copenhagen became a significant center of physics research with the granting of a professorship to Bohr in 1916, although its effect on the larger community only began to blossom five or six years later with his establishment of the Institute for Theoretical Physics. In a similar vein, Rome became a physics center of activity in 1925 with the appointment of Fermi to a chair of theoretical physics at the city's university, though again the group's influence would not be fully felt until the 1930s.

Prior to Fermi's arrival on the scene, Italy was known for having a world-class presence in the field of mathematics, but also for being a relative backwater in physics research. However, there was at the time an older physics professor in Rome who had a vision of how to change this situation. His vision relied on building a group centered on a young, gregarious, hard-working, and extraordinarily talented researcher. Orso Corbino found whom he was looking for in Fermi, and since Corbino was politically very well connected (he had served as Italy's minister of education as well as being a physics professor), he managed to have Fermi appointed to a university chair at the age of twenty-four, an unheard-of promotion in Rome at that time. During the following decade, as the group around Fermi grew in importance, Corbino consistently paved the way for its constituents by obtaining funding for their research, helping the young garner university appointments, and continually encouraging them. In him we see an example of the older individual who selflessly aids a younger generation.

The first to join the group were Italians, but by 1930 the reputation of the research, and of Fermi in particular, was such that a number of prominent young visitors began to arrive from foreign countries, staying for varying periods of time. The funds for their stays were provided by their home governments or by fellowships such as those granted by the Rockefeller Foundation. The attractions of Italy's capital contributed to the desirability of such a sojourn, but the chief attraction was Fermi and his coworkers.

The main style in theoretical physics during the 1920s, as seen in the instruction provided by the leading German educators, Max Born in Göttingen and Arnold Sommerfeld in Munich, was a formal and highly mathematical one. Bohr, relying much more on intuition, brought a new approach. Fermi, who was largely self-taught, had a different style. He believed in stripping problems to their essentials and then looking for a simple technique that would yield an approximate answer. Following this line, he became legendary for being able to estimate quickly the magnitude of any physical phenomenon, and in doing so he introduced a new kind of agility in performing calculations, one that was well suited to the emerging problems in nuclear physics.

Rome under Fermi's guidance became in the 1930s one of the world centers of physics research, marked by a new way of approaching problems, one that emphasized simple, approximate solutions.

The Environment

The most striking feature of the physics environment in Rome was the relative youth of the participants gathered around Fermi. As in Copenhagen and perhaps even more so, it contributed to the communal atmosphere in which the physicists not only worked together, but had a common social life, oriented around hiking in the neighboring hills and vacationing together in the Alps. The coworkers also adopted playful nicknames, with Fermi being addressed as "the pope," because of his supposed infallibility.

Although Fermi's initial reputation was built on his success as a theoretical physicist, he felt from the very beginning—and Corbino supported him in this—that he wanted Rome to have a strong presence in experimental physics as well. The same desire prevailed in Copenhagen, but Fermi's emphasis in this direction was much more direct and forceful than Bohr's. The plan was carried out by having the best of the young Roman experimentalists spend long periods in the world's leading laboratories so that they could acquire proficiency in cutting-edge techniques and bring their knowledge back to Rome. Accordingly, Franco Rasetti went to Pasadena and then Berlin, Emilio Segrè to Amsterdam and Hamburg, and Edoardo Amaldi to Leipzig and Cambridge. By the early 1930s they were seasoned veterans, carrying out world-class research. At this point Fermi decided that he would personally conduct experimental research, not simply act as an adviser to the cohort around him. In particular, while the group's expertise was in atomic physics, the discovery of the neutron strongly suggested a move to nuclear physics. In many ways this would be a new experimental effort, and Fermi now felt he wanted to participate fully in this line of inquiry.

Probably the most significant year in Fermi's career was 1934. It marked the high point of his research in theoretical physics with his introduction of the weak interactions, now recognized as one of the four fundamental forces of nature. These interactions explained how it was that nuclei could decay by the emission of electrons and an unobserved particle—the neutrino (a name coined by Fermi). At the same time he began the group's research on the bombarding of nuclei by slow neutrons, an enterprise that would lead to his being awarded the Nobel Prize in physics in 1938. The close connection of experimental and theoretical physics became a hallmark of the Rome school. Fermi is now recognized as the only twentieth-century physicist to have been a world leader in both.

At the same time, both the style of approaching problems and the informal atmosphere in Rome had significant effects.

Mentoring

Fermi was clearly a great mentor, building in Rome a school of physics from almost nothing, although the importance of Corbino at crucial junctures should not be underestimated. Moreover the school's influence was lasting, strong enough to survive even Fermi's 1938 departure for the United States, a move motivated by Italy's adoption of racial laws.

Comparing Fermi's career in this respect to those of the other great physicists born in the 1900–1902 period—namely Pauli, Heisenberg, and Dirac—is interesting. The last of the three was always a solitary figure and, though revered by the physics community, had very little direct contact with the younger generation. Pauli continued to be a significant educator, but his influence waned after the 1930s. Heisenberg, on the other hand, was a figure of great prominence in German physics, but his opposition to large-scale accelerators and his insistence on pursuing theoretical directions of his own choosing seem to have had on the whole a negative effect on physics in his home country and very little influence elsewhere. Fermi, by contrast, with his practical and flexible approach to physics, continued to be a dominant figure on the world stage. He was certainly one of the key individuals in the building of the atomic bomb, and after the war he made the University of Chicago the United States' preeminent institution for the study of physics. During the period up to his early death in 1954, he had a great influence on theoretical and experimental physicists.

Fermi at Chicago acted very much as he had in Rome, though he was by now twenty years older. One notices once again his modesty, his insistence on carrying out manual labor himself, his flexible and pragmatic approach to physics problems, and his continuous informal dialogue with students. Many of these students, including several who have gone on to great prominence, produced a memoir of this period, published by the University of Chicago Press, in which they recall how much they learned from Fermi. The book is called simply Fermi Remembered.

Fermi played a very important role in guiding younger physicists throughout his career, shaping the research communities in Rome, and later in Chicago, in decisive ways.

The Individual

One could argue that Fermi's direct influence on the physics community during the 1934–1954 period was as great as, or greater than, that of any other individual. The times were such that his particular set of skills was the one required for progress. Not an intellectual revolutionary, Fermi was the consummate problem solver, capable of taking a complicated set of conflicting notions, separating the wheat from the chaff, and eventually seeing a relatively simple way of obtaining an answer. Though well versed in mathematics, he never looked for mathematical elegance in his solutions or for conceptual leaps. One cannot imagine him formulating the Heisenberg uncertainty relations or the Dirac equation, much less Einstein's theory of general relativity, but he moved easily from field to field in physics, making notable contributions in almost all areas. No other figure in twentieth-century physics has exhibited the same breadth. This same flexibility and focus on current problems meant that Fermi never lost touch with the concerns of the physics community. When he died in 1954, aged fifty-three, he was regarded as being at the very height of his powers and influence.

The continued respect for Fermi led to the United States' largest particle research facility being named in 1974 the Fermi National Accelerator Laboratory, or more commonly, Fermilab.

Additional Factors

In addition to the time, the place, the environment, the mentoring, and the individual, which have been discussed in relation to Bohr and Fermi, other salient factors relevant to the development of quantum mechanics and nuclear physics were as follows.

Potential

What is the likelihood of extensions and ramifications of the creative endeavors?

It was clear almost from the start that quantum mechanics would have profound intellectual and practical ramifications. The behavior of, for example, metals and insulators, and the nature of chemical bonding, could now be understood. Tools such as the electron microscope were readily within grasp. The wealth of interesting problems and applications drew many workers into the field and provided them with interesting careers and employment possibilities. The emerging electronics and telecommunications industries became aware at this time of the connection between this line of research and commercial applications, as we see in the founding and growth of Bell Laboratories.

Less clear at the outset was what the applications of nuclear physics might be. That changed with the discovery in late 1938 of nuclear fission and the realization of the potential for both peaceful and military uses of this new energy source. Fermi immediately became the world leader in this new quest, heading the successful effort to build the first nuclear reactor and then moving to Los Alamos, where he was one of the key developers of the atomic bomb. Nuclear fuel continues to be a major energy source, and nuclear weapons continue as a threat to world peace.

Flexibility

What is the importance of changing the directions of research?

Bohr remained keenly aware of emerging scientific challenges for his institute. In the 1930s he shifted the institute's research program toward nuclear physics and later in the decade to an experimental biology program as well. A cyclotron was built, and under the guidance of Bohr's old friend George Hevesy, the Bohr Institute became a world leader in the use of radioactive isotopes for research and applied purposes.

Fermi, having with considerable effort established an important research group in atomic physics, opportunistically moved his team to an even more significant endeavor in nuclear physics. When the Second World War was over, now at the University of Chicago, Fermi saw the future of research in elementary particle physics that would follow from the new class of accelerators and became a pioneer in this field.

Funding

What should be the balance between funding directed to individual researchers and that assigned to institutions? How much leeway should there be in making use of research funds?

Bohr's activities in fundraising were far more extensive than Fermi's and are also well documented, so they provide the principal focus of this discussion. Very soon after the Bohr Institute was completed, Bohr set to work to raise the necessary funds for enlarging it. Physics was fortunate that he was a selfless individual with immense mental and physical energy. By 1924 the layout of the institute had clearly become insufficient. It was a three-story building with a lecture hall, a library, and office space on the first two floors. The Bohr family, originally living on the third floor but now numbering five sons, needed more room. Plans began for two more buildings, one of which would house the Bohrs and the other a dedicated experimental facility.

Grants from the government and two Danish foundations, the Carlsberg and the Rask-Oersted, initially provided most of the necessary funds for building the Bohr Institute. They continued to support Bohr, but in the early 1920s he began to look abroad as well. His greatest success came from the new emerging economic power, the United States. In 1923 John D. Rockefeller Jr. founded the IEB (International Education Board), which fifteen years later would become part of the Rockefeller Foundation. During that same year Bohr paid his first visit to the United States. Having received the Nobel Prize the year before, Bohr was recognized as a commanding intellectual figure even though he was not yet forty. In November 1923 he made a compelling presentation to the IEB, after which his institute was awarded forty thousand dollars—the first grant awarded by the IEB to a physics research institute. Danish providers and the city of Copenhagen rapidly met the IEB's condition that funds for buildings and instruments would be provided only if additional grants from other sources were obtained. By 1926 the new buildings were ready. Once the Bohr family moved to the adjacent villa, some of the space freed on the third floor of the old building was converted into a small apartment for visiting guests. Heisenberg, befriended by Bohr four years earlier, was the first to occupy it.

Though some of the young physicists arriving in Copenhagen came with funds from their own home countries, the two largest sources of support were the Danish Rask-Oersted Foundation and the IEB, which in 1924 instituted a set of one-year fellowships. Commonly known as Rockefeller Foundation Fellowships, these were designated for young researchers in the natural sciences. Of the more than sixty young visitors who stayed at the institute for substantial periods of time during the 1920s, thirteen came with funds from the former and fifteen with funds from the latter. Pauli was in the first group and Heisenberg in the second.

Juggling funds from several sources, Bohr managed to maintain a certain degree of freedom for extraordinary efforts and made use of this freedom wisely. The story of George Gamow, later the founder of big-bang cosmology, illustrates how Bohr employed this capacity. Gamow, arriving from Russia with a three-month stipend for study in Göttingen, made an important discovery there about how to apply quantum physics to nuclear decay. When his three months in Göttingen were over, he decided to return to St. Petersburg via Copenhagen in order to see Bohr and tell him about his work. Realizing immediately its importance, and discovering that Gamow had funds for only one day in Denmark, Bohr asked him if he would like to stay for a year if a stipend were to be provided. The answer from an astounded Gamow was of course an enthusiastic yes.

Bohr's desire to keep his institute in the forefront of both theoretical and experimental nuclear physics research and to enhance a related experimental biology program required massive efforts to obtain funding, which were particularly difficult in a small country such as Denmark that had little national support for research. Yet Bohr continued to be successful, in large part thanks to his longtime ties to the Rockefeller Foundation.

Unlike Bohr's, Fermi's fundraising efforts were relatively insignificant; others paved his way. In Rome, Corbino was the gray eminence, first obtaining a professorship for Fermi and later ensuring that the group had what it needed for success in terms of positions, facilities, and funds. In the United States, once the threat of war and the potential importance of nuclear weapons to it became real, there was no question of Fermi's being held back by a lack of resources.

The Multiplier Effect

How does a successful endeavor stimulate followers to emulate it?

The successes in Copenhagen and Rome led others to try to re-create what they had experienced. In the case of the Bohr Institute, many carried away the characteristics of what came to be known as the Copenhagen Spirit, as seen in Weisskopf's and Bloch's early tenures as director general of CERN (the European Organization for Nuclear Research). Sometimes this effect carried over even after an individual had left the field of physics, as in Max Delbrück's guidance of the phage group at Cold Spring Harbor Laboratory in the United States. In each of these cases—and there are others—Bohr and his Copenhagen institute acted as model and inspiration. Fermi's influence is seen most directly in postwar Italy, where the legend of his group has appeared in books and even films. It is clear that the relative importance of high-energy physics in Italy and the desire to integrate research in theory and experiment are in large part due to the wish to carry on this tradition.

Conclusion

We have tried to identify some of the factors that have led to exceptional creativity in physics, illustrating how they took shape in two examples: Bohr's Institute for Theoretical Physics in Copenhagen and the group working with Fermi in Rome. Though the details of implementation are different, I believe the criteria are sufficiently general to have wider relevance and could easily be applied to other examples in physics, such as Cambridge's Cavendish Laboratory under Rutherford's direction, and perhaps further still to areas of science beyond physics.


Bibliography

Bohr, Niels. Collected Works. 12 vols. Edited by Léon Rosenfeld, Erik Rüdinger, and Finn Aaserud. Amsterdam: North Holland/Elsevier, 1972–2007. (Includes some correspondence and commentary.)

Cronin, James W., ed. Fermi Remembered. Chicago: University of Chicago Press, 2004.

Fermi, Enrico. The Collected Papers of Enrico Fermi. Volume 1, edited by E. Amaldi, H. L. Anderson, E. Persico, F. Rasetti, Cyril S. Smith, A. Wattenberg, and Emilio Segrè. Chicago: University of Chicago Press, 1962.

Pais, Abraham. Inward Bound: Of Matter and Forces in the Physical World. New York: Oxford University Press, 1986.

———. Niels Bohr's Times: In Physics, Philosophy, and Polity. New York: Oxford University Press, 1991.

Pauli, Wolfgang. Scientific Correspondence with Bohr, Einstein, Heisenberg, and Others. Volume 1, edited by K. von Meyenn with the assistance of A. Hermann and V. Weisskopf. New York: Springer, 1979.

Segrè, Emilio. Enrico Fermi: Physicist. Chicago: University of Chicago Press, 1970.

Segrè, Gino. Faust in Copenhagen: A Struggle for the Soul of Physics. New York: Penguin, 2008.


Chapter Three

Physics at Bell Labs, 1949–1984
Young Turks and Younger Turks
Philip W. Anderson

Physics at Bell Telephone Laboratories had its beginnings long before the Second World War. Not only had C. J. Davisson and Lester Germer demonstrated electron diffraction already in the 1920s—work for which Davisson shared the 1937 Nobel Prize—but there had been a number of other quite fundamental discoveries: Johnson Noise in resistors, for which Harry Nyquist provided the theory, and radio astronomy are examples. Perhaps even more remarkable were the fundamental advances in mathematics that came out of the prewar Labs, in stochastic theory and, of course, information theory. Bell Labs had from its beginnings in the 1920s made a point of hiring absolutely first-rate scientists, beginning with H. D. Arnold, and the generation that went through the war was no exception: it contained such future stars as Charles Townes, William Shockley, Jim Fisk, Sid Millman, John Pierce, and Walter Brattain, to mention only a few, and the mathematicians Claude Shannon and S. O. Rice.

Yet the discoveries mentioned above had in common that they had been made in the course of work motivated entirely by direct applications to communications systems. While the Labs had the enlightened policy of allowing a scientist to carry such serendipitous findings to some kind of fruition, and to allow their publication, there was no hint of management encouraging purely curiosity-driven research. In the late 1930s, however, a very small nucleus with a somewhat broader agenda began to accumulate, on the initiative of M. J. Kelly, the research vice president of the time; he seems to have had in mind from the first the possibility of a semiconductor amplifier, but he realized that his purposes were better served by starting a broad-gauge program in the quantum physics of solids. Bill Shockley was the core and was clearly hired for the purpose, but other names that might be mentioned are Jim Fisk, Stan Morgan, Walter Brattain, and Gerald Pearson. I have read that this group—specifically Kelly, Shockley, and Fisk—called themselves the "Young Turks," implying a revolutionary attitude, and many people date the origin of a research-style atmosphere in the Labs from this time.

I do not have personal knowledge of that period, but two anecdotes might illustrate that the group was looking at physics with a very broad perspective. The first is that I remember studying, with Gregory Wannier, a long prewar paper by Shockley and a colleague named Foster C. Nix on order-disorder phenomena in alloys; the second is the well-known fact that Shockley and Fisk took upon themselves the task of designing a hypothetical nuclear reactor in 1941 or so, with no knowledge of the Manhattan Project, and succeeded so well in reproducing Enrico Fermi's graphite reactor that the U.S. government became quite disturbed when they tried to patent it after the war. Neither project seems to have had anything to do with the telephone industry.

Undoubtedly, physics on the applied level contributed enormously to winning the war—for instance, the Labs were the site of choice for the development of the English invention of the magnetron microwave generator, which was put in the hands of a small group under Fisk. This device was instrumental in winning the Battle of the Atlantic and in the Allies' continuing superiority in radar. Scientifically more significant, the silicon crystal detector, which was a key component of every microwave radar set, underwent steady development. But as the war wound down in 1945, the management and the working staff began to see fantastic prospects ahead of them, due to the incredible new technologies and materials they had available from wartime developments. Charlie Townes, for instance, immediately began to apply his wartime microwave radar skills to a series of studies of the spectroscopy of molecular gases like ammonia (NH₃) and hydrogen cyanide (HCN). To an obscure graduate student at Harvard University these results were just what my thesis needed, and they seemed to me clearly much more accurate and professional than the competition at Duke University and Oxford University; I therefore developed a determination to go to Bell Labs if I possibly could. But the atmosphere at Bell for pure research had not yet become totally friendly as, by the time I got there in 1949, Charlie had departed for Columbia University, where he felt more comfortable following his own scientific imperatives—although I believe he had received from Bell Labs all kinds of assurances of total freedom.

Mervin Kelly, Bill Shockley, Jim Fisk, and their allies immediately began a program of aggressive hiring across a number of fields of physics. Quite contrary to the impression that some books about that period leave, this hiring was not focused only on the dream of a semiconductor amplifier. A big department called Physical Electronics made a series of magnificent hirings such as Julius Molnar, Ken McKay, John Hornbeck, and Conyers Herring; the first three segued into high management positions when the physics of the electron tube—physical electronics—became less urgent. The then-new insulating magnetic materials, the ferrites, led to the hiring of John Galt and Charles Kittel. Bernd Matthias and I, and at least two others eventually, were aimed at ferroelectricity. It may have been an indicator of Bell management's interest in solid-state physics generally that I had read, as a graduate student, about just these topics in enthusiastic factual pieces by J. J. Coupling (aka John Pierce of Bell Labs) in Astounding Science Fiction magazine, sandwiched between stories from the Golden Age of sci-fi by writers such as Isaac Asimov, Robert Heinlein, and A. E. van Vogt.


I can by no means give a complete picture of this period of expansion: many other scientists appeared in the two early physics research departments, either from within the Labs' war effort—such as Homer Hagstrum, Herb McSkimin, Alan Holden, and H. J. Williams—or as new employees. Then, as soon as it became evident that not just the transistor, but also a number of the other initiatives such as ferrite cores and steering magnets, had real economic potential, and the Labs' gamble on solid-state physics was likely to pay off, a steady stream of new colleagues—and soon a number of spin-off departments in disparate fields—appeared. But I don't want to give the impression that the growth was an untrammeled exponential; much of it involved internal cannibalization, and in that early period there was also a lot of exodus to the development side or up into management or both. I have learned on good authority that the Physical Research department at Bell Labs hardly grew in size after the 1950s.

But something else very interesting happened during that decade to a decade and a half, something rather fascinating from the point of view of the history and sociology of science. When I arrived in early 1949 the Labs still had mostly the characteristic mores of an industrial laboratory—a very enlightened one, with no time clocks and no rigid requirements of shirts, jackets, and ties (as was the case at IBM at that time). Even so, we all worked more or less from 8:45 a.m. to 5:15 p.m.—our equivalent of 9 to 5. What we did was expected to be more or less what our supervisors assigned us to do, which they in turn justified to the higher-ups in terms of its relevance to the telephone business—again, with a very enlightened view of what might be relevant, as I explained above. And all of us underwent a compulsory tour of indoctrination in which we learned how the telephone business actually worked. There was even a course meant to bring the engineers who came with mere bachelor's degrees up to snuff; one of my less pleasant early assignments was to teach atomic physics in this "Kelly College," as it was called. Relationships were very hierarchical, in terms of salary structure, privileges, and responsibilities. And management was unquestionably the business of management: you never knew what decision had been taken until it happened, including decisions about your own fate. (I was all but fired after the first year, but learned this fact only years later.) Relative salary levels were utterly secret. Papers were prepared first in internal memorandum form, and underwent review by management for scientific content, and then by the patent lawyers. Some very sound work of mine never made it beyond the memo stage—but after all, everyone who mattered was on the internal distribution list.

Yet there were compensations: the Labs were also paternalistic. A senior scientist was assigned to listen to my first effort to give a ten-minute paper at the American Physical Society meeting and to give me helpful advice; and the management style was, and remained for many years, to use the lightest touch and absolutely never to compete with underlings. (This was the taboo that Shockley transgressed, for which he was never forgiven.) Corresponding to the hierarchical management structure, there was almost no requirement to justify one's own work—that was your supervisor's responsibility—and no one ever wrote a proposal, hard though this may be to believe today. Incidentally, seducing the female help was, and as far as I know still is, absolutely forbidden. That was a rule, but the culture was such that divorce was very rare, oddly enough, even among the newly arrived scientists.

So what happened then? This is clearly not the Bell Labs that many people are familiar with, the Bell Labs that could be thought of as the eighth member of the Ivy League, except that it didn't play football and the scientists had more freedom. Thinking about it after all this time, the change almost seems to have been inevitable. The key is that in that early burst of hiring after the war, and the next round brought on by the exhilaration of success, the Labs had done far too well: they had all but cornered the market. At the same time, the rest of the world was more and more waking up to the fact that solid-state physics was something worth pursuing. Finally, the people they had hired were of a type and quality who were certain to find the status of docile wage slave, no matter how paternally coddled, a bit uncomfortable.

Management had two alternatives. One was to let us all go, once we had realized our value in the outside world—to replace us with more docile, if less creative, scientists who would do what they were told. This is the alternative that Shockley forced on the Labs in the case of John Bardeen, and that seems to have been chosen, to the Labs' serious detriment, elsewhere in the semiconductor program. It is very much to the credit of the Labs' management of the time—people like Stan Morgan, Addison White, Sid Millman, and soon W. O. Baker—that they chose the other alternative, namely to switch to a very different management style in order to hold on to what they realized was an irreplaceable asset. Except in one instance, I don't think this decision was a conscious choice; it was instead a response to a series of incidents, some of which I'll recount here.

After the loss of Bardeen to the University of Illinois, the Shockley-Morgan department was divided up—the implication being, so as to save us from Shockley's insatiable appetite for peons—and those of us not working on semiconductors remained with Morgan, the magnetic types having as a kind of straw boss Charlie Kittel. But one could hardly break into the exciting new world of NMR and EPR without a magnet, even though telephones needed no fields bigger than 0.1 tesla; so almost the first real instrument purchase was a Bitter magnet. I still have in my files the memo of the scheduling meeting in 1950 in which we shared out the early experiments—no hierarchy there. But then, in a few years, we lost Kittel to the University of California at Berkeley, as a replacement for its losses to the notorious Loyalty Oath.

How did we get into low temperatures? All prewar work had been at room temperature—temperatures below 250 K seldom being encountered by telephone equipment at that time. This, too, is a story of response to outside pressure. Bernd Matthias had found us five or six fascinating new ferroelectrics, but more importantly he had trained a high-school graduate technical assistant1 named J. P. Remeika in the crystal-growing techniques that eventually made Remeika famous. But in 1951 Bernd submitted to the blandishments of the University of Chicago, and actually spent a year there on leave learning, with John Hulm, the rudiments of superconductivity. This was not a sabbatical leave, but he treated it as one. In order to entice him back, we had to let him continue in what then seemed like the purely academic field of superconductivity, to buy an A. D. Little Collins helium liquefier for him, and to provide another technical assistant to keep it running—and of course, its output soon became the mother's milk of a dozen experiments by others as well.

Bernd, incidentally, cast the first stone in breaking our dress code—when the vice president's administrative assistant objected to his not wearing socks, he was told to get lost. Bernd was very much the kind of person I have in mind in my title referring to "younger turks"; he was not about to live by any conventional code. Where I managed by simple naivety and dumb luck, assuming that the Labs would not be so stupid as to fire me, Bernd had incredible networking skills, long before the term had been invented, and he came back from Chicago close friends with the whole Fermi circle—Harold Urey, Marvin Goldberger, W. H. Zachariasen, Gregor Wentzel, and several others; he was not above practicing these skills on whoever had financial control over him.

Travel, especially overseas, was the traditional Bell Labs reward for long service, preferably in management. One was expected to travel in a group, first class, to carefully specified sites, and to come home with a formal written report of all the technology one had witnessed. When I was invited to be a Fulbright lecturer in Japan and to attend a posh international conference on the way, I was oblivious to this tradition, and blithely assumed that acceptance was up to me—hence I moved a little closer to the sabbatical idea—though by no means was it a leave with pay. (And the first-class ticket for the journey turned out to be a Fulbright-provided berth on the rickety Hikawa Maru, the last ship of the Japanese navy.) Finally, Conyers Herring became the first to manage a formal sabbatical leave, when he went off to the Institute for Advanced Study in Princeton in 1954 or so—and while I know of no competitive threat from the IAS, it was relevant that when the Labs first began to mend its uncompetitive salary structure, they began with a 40 percent raise for Conyers, who was obviously irreplaceable.

I remarked that there was one case in which the change in culture was totally deliberate. The population most at risk seemed to be theorists. We had already lost Bardeen and Kittel, were on the point of losing Wannier, and also Harold Lewis, which was another odd story. Lewis came to Bell from the IAS, presumably via Conyers. He had been a student of Robert Oppenheimer, and had had a job at UC Berkeley, which he gave up because of the Loyalty Oath. We had had our own loyalty questionnaire, which a very few of us had not signed—Wannier, myself, and maybe a few others—but with no perceptible consequences. Perhaps that was when we began to feel even a little bit superior to the academic world. Anyhow, Lewis was hired with hardly a hiccup—after all, he kept his Q security clearance through the whole oath nonsense, and the Labs was not naive on matters of security and secrecy. But he was more of a danger to the Labs than they knew; he had been an academic and made us aware of what we were missing. When we requested in 1955 a separate subdepartment for theorists, to our surprise the powers that be (at that time Addison White and vice president Baker) did not argue but merely asked how we would like it to be. The resulting design was almost all Harold's work, consisting of postdocs, a rotating boss on whose identity we were consulted, sabbaticals, a travel budget under our control, and a spectacular summer visitor program, which for a few years, until other summer programs opened up, attracted an elite bunch. One of the reasons for our success with management was the fact that for several years we had had Walter Kohn and Quin Luttinger as regular summer visitors, and they had become so useful that our bosses desperately wanted to attract them permanently. In the case of Walter, not only did they fail, but the shoe ended up on the other foot. He went soon after to the new university at La Jolla and used his insider knowledge of Bell to hire away three of our stars, Matthias, George Feher, and Harry Suhl, starting off his new physics department with a bang.

One more story: How did the Labs ever get a biophysics department? That was done by perhaps the brashest Young Turk of them all, Bob Shulman. Soon after the theory department had gained its extraordinary privileges, the rest of Physical Research began to demand equal rights. Bob managed on that basis to wangle a sabbatical leave to take a visiting professorship in Paris, lecturing on the magnetism of transition metal fluorides, his subject of the moment. That same year, 1961, I had finished my stint as department chair and was invited to lecture in Cambridge and be a visiting fellow of Churchill College. Bob had said vaguely he might see me in England, and then set off across the Pacific with family in a slow boat toward Paris. What followed resembles nothing so much as the story of the Boll Weevil song—after three weeks in Paris he offered himself at the door of Francis Crick's operation in the Cavendish Laboratory's courtyard in Cambridge and was given space and a lab partner; next he was renting Leslie Orgel's centrally heated house; and then I got him dining rights at Churchill, where he got along like a house afire. He even brought home a Jaguar. By the end of the year he was a full-fledged molecular biologist, and he soon talked Bell management into letting him attract a few friends and start doing nuclear resonance on biosystems. The department that ensued outlasted most of the rest of Bell's research, for good reason, though Bob himself left it three decades ago for Yale University.

There are many more stories I could tell: how an eminent astrophysicist finally wore out the patience of our long-suffering management in the rebellious 1960s; of the Russian visitors, the yellow signs, and how Bob and I got our phones tapped; and, more seriously, of how we got into nuclear physics. But I hope I have given a bit of the flavor of those days, and also of the social and economic system that made it possible.

For three decades it seemed as if the research end of Bell Labs couldn't turn around without inventing something extraordinary. The transistor, of course. The first three-level maser; high-field superconductivity; the laser, of course, in three or four manifestations, the semiconductor one being the most useful. But also the light-emitting diode (LED), fiber optics, the communications satellite, liquid-crystal displays, the cellular telephone system, in vivo nuclear resonance, functional magnetic resonance imaging (fMRI), molecular beam epitaxy (MBE), the Josephson effect—you name it. As long as the company remained a unity, whatever we invented was bound to come back to us and be useful in the end.

This concealed the fact that Bell Labs was extraordinarily poor at exploiting its technology economically. Almost all of the above inventions were either never exploited by Bell, or exploited only after being put through stages of development by others; the story of semiconductor technology told in Crystal Fire: The Invention of the Transistor and the Birth of the Information Age by Michael Riordan and Lillian Hoddeson is not at all atypical. There were plenty of cases, for instance, where we in the Research Department ended up manufacturing a device for the telephone system because our development engineers couldn't, or wouldn't, do it. Why this ineptness persisted is not part of my story. It was a management failure on very many levels, mostly the highest and best paid. One part of the blame has been put on the undoubted fact that after we in the pure research nucleus managed to change the mores, because our sensible managers felt we had earned it and would depart if we were not indulged, the rest of the Labs insisted on doing the same, because creative "research" seemed to earn all the goodies. There was not enough acceptance of the necessity for peons as well as Young Turks. This is undoubtedly true, to an extent, but the decline of Bell Labs was far more precipitate than this single reason can explain. I personally believe that if they had been managed through the post-1984 crisis with the flexibility and intelligence exhibited in those early days, rather than with greed2 and executive suite hubris, the Labs might still be with us.


Notes

1. The practice in those years was to endow each new experimentalist MTS (member of technical staff) with a one-room laboratory, funding for equipment to fill it, and a TA (technical assistant), often a graduate of a local vocational school, to help build, assemble, and operate the equipment.

2. It is worth noting that in the years of regulation, the salaries of our wonderful managers were never off-scale relative to those of the scientific "stars"—they were perhaps larger by a factor of two (I have no real information on this, but I saw the managers' lifestyles, and we were close friends with some of them)—a contrast with today.

Bibliography

Anderson, Philip W. More and Different: Notes from a Thoughtful Curmudgeon. Singapore: World Scientific, 2011.

Bernstein, Jeremy. Three Degrees above Zero: Bell Labs in the Information Age. New York: Scribner, 1984. (This book best captures the atmosphere of the culture described above, which even then applied to only a small minority of Bell Labs staff as a whole. In full disclosure, I should state that I mentored the author in his effort.)

Gertner, Jon. The Idea Factory: Bell Labs and the Information Age. New York: Penguin Press, 2012.

Riordan, Michael, and Lillian Hoddeson. Crystal Fire: The Invention of the Transistor and the Birth of the Information Age. New York: W. W. Norton, 1997.


Chapter Four

The Usefulness of Useless Knowledge
The Physical Realization of an Electronic Computing Instrument at the Institute for Advanced Study, Princeton, 1930–1958
George Dyson

The pursuit of these useless satisfactions proves unexpectedly the source from which undreamed-of utility is derived.
Abraham Flexner, "The Usefulness of Useless Knowledge" (1939)

“What could be wiser than to give people who can think the leisure in which to do it?” wrote economist Walter Stewart to Abraham Flexner, as the Institute for Advanced Study (IAS) prepared to open the doors of Fuld Hall, its new headquarters on the site of the former Olden Farm in Princeton, New Jersey, in 1939.1 How has Flexner’s effort to encourage exceptional creativity stood up over time? Seventy years later, Albert Einstein remains the institute’s most famous resident—­ remembered more for breakthroughs achieved while he was working as a patent clerk in the first decade of the twentieth century than for any of the work he did during his years of leisure at the IAS. “I hope that after another seventy-­five years the institute has someone else to celebrate besides Einstein,” remarked James Wolfensohn, outgoing chairman of the trustees, on the occasion of the seventy-­fifth anniversary of the incorporation of the IAS.2 The origins of the Institute for Advanced Study can be traced to 83

george dyson

topologist Oswald Veblen (1880–1960), who “conceived the whole project,” in the assessment of physicist P. A. M. Dirac.3 Oswald was the nephew of social theorist Thorstein Veblen (1857–1929), who coined the phrase “conspicuous consumption” in his 1899 masterpiece The Theory of the Leisure Class. In his 1918 The Higher Learning in America, Veblen had called for “a freely endowed central establishment where teachers and students of all nationalities, including Americans with the rest, may pursue their chosen work as guests of the American academic community at large.”4 The first of eight children, Oswald Veblen attended public schools in Iowa City, followed by the University of Iowa, where he was awarded one prize in sharpshooting and another prize in math. He took time off from his studies to travel down the Iowa and Mississippi Rivers in the style of Huckleberry Finn, and remained an avid outdoorsman until the day he died. An attachment to the soil ran deep in his Norwegian blood. “He is a most excellent person, but the word ‘building’ or ‘farm’ has an intoxicating effect upon him,” warned Abraham Flexner in 1937.5 “It has frequently happened that an attempt to solve a physical problem has resulted in the creation of a new branch of mathematics,” Veblen wrote in October 1923 to Simon Flexner (the elder brother of Abraham), who was director of the Rockefeller Institute for Medical Research, urging the Rockefeller Institute to extend its existing National Research Council fellowships, focused on physics and chemistry, to include mathematics.6 Veblen, who had transformed the theory and practice of ballistics at the U.S. Army’s Aberdeen Proving Ground during the First World War, and the teaching of mathematics at Princeton University afterward, was now seeking to break some of the distinctions between pure and applied mathematics. Simon Flexner responded favorably, and four months later Veblen returned with a more ambitious request. “The way to make another step forward,” he wrote, “is to found and endow a Mathematical Institute. The physical equipment of such an institute would be very simple: a library, a few

84

th e usefulness of usel ess knowled ge

offices, and lecture rooms, and a small amount of apparatus such as computing machines.”7 Flexner answered, “I wish that sometime you might speak with my brother, Mr. Abraham Flexner, of the General Education Board.”8 Congress had chartered the Rockefeller Foundation’s General Education Board in 1903 for “the promotion of education within the United States of America, without distinction of race, sex, or creed,” and although primarily focused on high-­school education in the American South, the board was free to support higher education of any kind. Simon and Abraham Flexner, the fifth and seventh of nine children, were born in Louisville, Kentucky, and reached the Rockefeller Foundation through very different paths in life. Their father, Moritz Flexner, was an immigrant peddler from Bohemia who settled in Louisville in 1854, carrying his wares on his back until he saved enough money—­ four dollars—­to buy a horse. Simon Flexner (1863–1946) dropped out of school after the seventh grade, drifting from one menial job to another until a position at a pharmacy sparked a late-­blooming interest in medicine that led, eventually, to his becoming an authority on infectious diseases and director of the leading microbiological research institute in the United States. Abraham Flexner (1866–1959), the only Flexner to attend college (thanks to his elder brother Jacob), graduated from Johns Hopkins University in 1886. He returned to Louisville to teach Latin and Greek at Louisville High School, and established his reputation as an educator, in 1887, by flunking an entire class. He then opened his own school, and sold it at a profit in 1905, going first to Harvard University for a degree in psychology and philosophy, and then to Germany where he wrote a scathing critique of American higher education, published in 1908. The Carnegie Foundation then commissioned him to compile a report on medical education in the United States and Canada; he visited some 155 medical schools, and his exposure of their deficiencies resulted in the closure of two-­thirds of the medical schools in the United States.


Veblen’s proposal to the Flexners lay dormant until the philanthropic interests of Newark merchant Louis Bamberger helped bring his dream of an autonomous institute back to life. Bamberger (1855–1944) was born above his father’s dry goods store in Baltimore, and in 1892 went into business for himself, transforming a rented storefront in a blighted neighborhood of Newark into a department store that, by 1928, occupied 1 million square feet, with 3,500 employees and over $32 million in annual sales. In June 1929 Bamberger and his sister Carrie Fuld negotiated a sale to R. H. Macy & Co. that closed in September, just six weeks before the stock market crash. They took $11 million of the proceeds in cash, distributed $1 million among 225 employees who had served fifteen years or more, and enlisted their chief accountant, Samuel D. Leidesdorf, and legal advisor Herbert H. Maass to help decide how to disperse the rest. They intended to endow a medical college, with the preferred location being their own thirty-­acre South Orange estate. Maass and Leidesdorf were repeatedly referred to Abraham Flexner as the authority they should consult. “His advice to us was that there were ample medical school facilities in the United States,” Maass explained.9 This was Flexner’s chance. He had been preaching about the deficiencies of higher education for many years, and when Maass and Leidesdorf visited, in December 1929, he had the proofs of his forthcoming book, Universities: American, English, German, sitting on his desk. His guests took a copy with them when they left. The book, expanding upon the Rhodes lectures Abraham Flexner delivered at Oxford University in 1928, gave a depressing account of higher education in America, concluding with a call for the outright creation of . . . a free society of scholars—­free, because mature persons, animated by intellectual purposes, must be left to pursue their own ends in their own way. Administration should be slight and inexpensive. Scholars and scientists should participate in its government; the 86

th e usefulness of usel ess knowled ge

president should come down from his pedestal. The term “organization” should be banned. The institution should be open to persons, competent and cultivated, who do not need and would abhor spoon-­feeding—­be they college graduates or not.10 In explaining the thinking behind the Institute for Advanced Study, Flexner credited his parents for being “shrewd enough to realize that their hold upon their children was strengthened by the fact that they held them with a loose rein.”11 This principle guided his educational philosophy, even though “to be sure, we shall thus free some harmless cranks.”12 On May 20, 1930, with Flexner appointed as the first director, a certificate of incorporation was signed for “the establishment, at or in the vicinity of Newark, New Jersey, of an institute for advanced study, and for the promotion of knowledge in all fields.” The Bambergers committed $5 million to start things off. “So far as we are aware,” they announced in their initial letter of instruction to the trustees, there is no institution in the United States where scientists and scholars devote themselves at the same time to serious research and to the training of competent post-­graduate students entirely independently of and separated from both the charms and the diversions inseparable from an Institution the major interest of which is the teaching of undergraduates.13 The new institute existed only on paper, envisioned by Abraham Flexner as “a paradise for scholars who, like poets and musicians, have won the right to do as they please.”14 Creating paradise, even with $5 million during the Great Depression, was easier said than done. Criticizing higher education, as Flexner had been doing for twenty-­two years, was one thing; building a new institution from the ground up was something else entirely. 87

Flexner spent six months consulting with leading intellectuals and educational administrators in Europe and the United States. Classicists advised him to start with classics, physicists with physics, historians with history, and mathematicians with mathematics. Julian Huxley advised mathematical biology, arguing that “there is in biology a lamentable lack of general appreciation of a great deal of systematic and descriptive work.”15 The Bambergers wanted to start with economics and politics, because, as Flexner put it, “the plague is upon us, and one cannot well study plagues after they have run their course,”16 and they could thereby “contribute not only to a knowledge of these subjects but ultimately to the cause of social justice which we have deeply at heart.”17 Some thought the institute should be closely associated with an existing university; others thought it should be far removed. “It is the multiplicity of its purposes that makes an American University such an unhappy place for a scholar,” advised Oswald Veblen. “If you can resist all temptations to do the other good things that might be attempted, your adventure will be a success.”18

Was paradise even possible? Historian Charles Beard predicted “death—intellectual death—the end of many a well-appointed monastery in the Middle Ages.”19 Future Supreme Court justice Felix Frankfurter, who scrawled “news from paradise. Not my style” across a letter from Flexner, pointed out that “for one thing, the natural history of paradise is none too encouraging as a precedent. Apparently it was an excellent place for one person, but it was fatal even for two.”20

Flexner, who had distributed some $600 million over the course of his association with the Rockefeller Foundation, believed most educational funding had too many strings and expectations attached. At the close of his career, it was time to try something else. “I should think of a circle, called the Institute for Advanced Study,” he wrote in 1931.

Within this, I should, one by one, as men and funds are available—and only then—create a series of schools or groups—a school of mathematics, a school of economics, a school of history, a school of philosophy, etc. The “schools” may change from time to time; in any event, the designations are so broad that they may readily cover one group of activities today, quite another group, as time goes on.21

“The Institute is, from the standpoint of organization, the simplest and least formal thing imaginable. Each school is made up of a permanent group of professors and an annually changing group of members,” he explained. “Each school manages its own affairs as it pleases; within each group each individual disposes of his time and energy as he pleases. . . . The results to the individual and to society are left to take care of themselves.”22

Flexner decided to start with math. “Mathematics is singularly well suited to our beginning,” he told the trustees. “Mathematicians deal with intellectual concepts which they follow out for their own sake, but they stimulate scientists, philosophers, economists, poets, musicians, though without being at all conscious of any need or responsibility to do so.” There were practical advantages as well: “It requires little—a few men, a few students, a few rooms, books, blackboard, chalk, paper, and pencils.”23 He deferred to Veblen as to candidates, explaining that “mathematicians, like cows in the dark, all look alike to me.”

On June 5, 1932, Oswald Veblen was appointed to the first professorship (effective October 1, 1932), followed by Albert Einstein (effective October 1, 1933). They were joined by John von Neumann, Hermann Weyl, and James Alexander in 1933, and Marston Morse in 1934. The exodus of mathematicians from Europe—with Einstein leading the way to America—began just as the Institute for Advanced Study opened its doors. Its founders had envisioned their educational utopia as a refuge from the mind-numbing bureaucracy of American universities; they had not counted on the disaster from which their sanctuary would shortly offer an escape. Veblen assumed the chairmanship of the Rockefeller Foundation’s Emergency Committee for Displaced German Scholars, using Rockefeller money and the promise of temporary appointments at the institute to counter the twin misfortunes of anti-Semitism in Europe and a depression in the United States.

The IAS operated out of an assortment of temporary facilities for its first nine years. “The way to reform higher education in the United States is to pay generous salaries and then use any sort of makeshift in the way of buildings,” Flexner had advised the Bambergers in 1932.24 “Everybody was working somewhere else,” Klári von Neumann observed upon her arrival in Princeton in 1938.

Flexner had his office in one of the buildings along Nassau Street; the mathematicians had rooms in Fine Hall, which was the university’s mathematics building; the economists had some kind of an office in the basement of the Princeton Inn; and the few archaeologists who were members essentially worked in their own homes when in Princeton and then went out on location to dig.25

Veblen, retaining the sensibilities of a Norwegian American homesteader, pushed not only for buildings but for land: enough to establish a refuge for wildlife as well as a refuge for ideas. “There is no educational institution in the United States which has not in the beginning made the mistake of acquiring too little rather than too much land,” he wrote to Flexner, arguing for the purchase of Olden Farm.26 Veblen orchestrated the purchase of 610 acres before the acquisitions of adjacent farms and woodland came to an end.

The School of Mathematics opened in 1933, followed by the School of Humanistic Studies in 1934, and the School of Economics and Politics in 1935. Historical Studies (amalgamating the humanists and economists) was formed in 1949, and Natural Sciences was formed as an offshoot from Mathematics in 1966. Social Science was established in 1973. A temporary group in theoretical biology was established in 1999, and a permanent Center for Systems Biology in 2005. A core group of permanent faculty was appointed for life, while visitors, with rare exceptions, were invited for one year, or less. The term extended from October through April, with no duties or responsibilities except to be in residence at the institute during that time. “The other half of the year the staff will be technically on vacation,” it was reported in 1933, “but Dr. Flexner has found that those engaged in research often do their best work while ‘on vacation.’”27

Flexner believed in generous remuneration for the permanent faculty, noting that although wealth might invite distractions from academic work, “It does not follow that, because riches may harm him, comparative poverty aids him.”28 This generosity was not extended to visitors, “for on high stipends members will be reluctant to leave.”29

Flexner was determined to avoid “dull and increasingly frequent meetings of committees, groups, or the faculty itself. Once started, this tendency toward organization and formal consultation could never be stopped.”30 The faculty did start having meetings, and, in a vote of no confidence in Flexner, decided that they would reserve the right to grant permanent appointments for themselves. Flexner resigned on October 9, 1939, and was immediately replaced (bypassing any ambitions on the part of Veblen) by his understudy Frank Aydelotte (1880–1956), a Louisville teaching colleague who had become president of Swarthmore College (and an influential Quaker sympathizer).

Aydelotte presided with great diplomacy through the Second World War, granting leave to those, such as von Neumann, Veblen, and Morse, who were engaged in the war effort, while preserving a refuge for those who were not. “In these grim days when the lights are going out all over Europe and when such illumination as we have in the blackout is likely to come, figuratively as well as literally, from burning cities set on fire by incendiary bombs,” he reported in May 1941, “some men might question the justification for the expenditure of funds on Humanistic Studies—on epigraphy and archaeology, on paleography and the history of art.” He could not reveal that the institute was already engaged in behind-the-scenes support for nuclear weapons work, but he did announce the institute’s unwavering support for “the critical study of that organized tradition which we call civilization and which it is the purpose of this war to preserve. We cannot, and in the long run will not, fight for what we do not understand.”31

Aydelotte was succeeded in 1947 by J. Robert Oppenheimer, who viewed the institute as “an intellectual hotel,” and who established a very different style of administration from that of Flexner and Aydelotte, who had been educational administrators, but not academics themselves. Oppenheimer, who had studied physics at prewar Göttingen University, whose preeminence Flexner had sought to emulate, was American by birth and had risen, unexpectedly, to assume the leadership of the Los Alamos laboratory that built the atomic bomb. Although officially under the command of General Leslie Groves of the U.S. Army, Oppenheimer quietly took command of General Groves. “Groves never realized that he had been co-opted to the scientific task,” says Harris Mayer. “To the end of his life he really believed that he had made the atomic bomb.”32

The scientists and mathematicians who had helped build the atomic bomb at Los Alamos soon found themselves wondering “What’s next?” Many of those who had left academia to spend the war years on a mesa in New Mexico, working side by side with engineers as well as theoreticians, found it hard to go back to a purely academic life. In addition to the lures of nuclear physics and weaponeering, the Los Alamos scientists had also been exposed to computing machines, and some of them, von Neumann in particular, were hooked. “I am thinking about something much more important than bombs,” von Neumann announced after returning to the institute from Los Alamos. “I am thinking about computers.” And he decided to build one for himself.

Why build the computer at IAS? The Massachusetts Institute of Technology (MIT), Harvard University, and the University of Chicago were all offering von Neumann department-head-level positions, with access to their existing laboratories and any resources necessary to build and equip a new one. “How does all of this fit in with the Princetitute?” Norbert Wiener asked von Neumann in March 1945, suggesting a move to MIT from the IAS. “You are going to run into a situation where you will need a lab at your fingertips, and labs don’t grow in ivory towers.”33

Von Neumann, taking the opposite approach, decided to import a few engineers to an institute populated with mathematicians, rather than importing a few mathematicians to an institution populated with engineers. Partly this was wishful thinking: von Neumann expected more assistance from the nearby laboratories at RCA and Princeton University than he ever actually got. Partly it was a need for secrecy: the first job for the new computer was to perform the hydrodynamic calculations necessary to determine the feasibility of the hydrogen bomb. This could be done quietly at the institute, where several of the key figures in the Atomic Energy Commission, including Oppenheimer and Lewis Strauss, were already on board. It is an irony of history that the key, computational step in the development of the hydrogen bomb—a development that Oppenheimer would later be accused of obstructing—was conducted under Oppenheimer’s directorship at the IAS.

Von Neumann’s engineers were treated as second-class citizens at the IAS. “The coming of six engineers with their assortment of oscilloscopes, soldering irons, and shop machinery was something of a shock,” remembers Willis Ware, who was hired in March 1946. “We were doing things with our hands and building dirty old equipment. That wasn’t the institute.” Although the lack of engineering facilities presented severe obstacles (especially in a time of postwar shortages), this handicap became one of the project’s strengths, and contributed to the exceptional creativity that was its result. There was no installed base of engineers at IAS to say, “Here, we do things this way.” Von Neumann’s group could take a completely fresh approach. Because engineers were only hired when absolutely necessary, the group remained nimble and small—fewer than a dozen engineers were ever involved at any one time.

At an existing government laboratory they would have been bound by bureaucracy, while at an industrial laboratory they would have been bound to keep technical breakthroughs to themselves. Instead, they published regular progress reports, with accompanying blueprints, as rapidly as the elements of the new machine were finished, so that the computer was duplicated, almost immediately, in some fifteen other laboratories around the world. The government funding was given without constraints—as long as the machine would be made available, for thermonuclear calculations, until Los Alamos could build a copy of it for themselves. “Professor von Neumann and I believe,” project director Herman Goldstine wrote in 1951, “that the Institute has an almost unique contract with the Ordnance Department in that the Government has, in fact, given us a grant to build a machine for ourselves.”34 The project was free of the distractions caused by fundraising that plague similar projects today—and the computing group did not have to map out an unknown path in advance in order to secure the money to move ahead.

The project operated on an annual budget of about two hundred thousand dollars for about five years. “We would have, once a year, kind of a pass-the-hat session in which we would sit up in the board room in the institute with these representatives of all these government agencies,” remembers engineering team leader James Pomerene, “and they would say, ‘Well, I can put in ten thousand dollars’ and another guy would say ‘I can put in twenty thousand dollars.’ And one would say ‘How about you, Joe? You’re good for thirty thousand dollars, aren’t you?’ We would have our two hundred thousand dollars put together, and it all worked fine.”35

Bureaucracy was minimal. “The Army contract provides for general supervision by the Ballistic Research Laboratory of the Army, whereas the Atomic Energy Commission provides for supervision by von Neumann,” it was noted on November 1, 1949. Because they did not have to write detailed proposals explaining what they were going to do, the Electronic Computing Project group could concentrate on getting the machine built and reporting the results. No patent applications were filed, and all details of the machine as well as its programming were placed in the public domain, in keeping not only with von Neumann’s wishes but also with those of Abraham Flexner, who had argued, in the pages of Science, in 1933, that “the scientific discoveries that have ultimately inured to the benefit of society either financially or socially have been made by men like Faraday and Clerk Maxwell who never gave a thought to the possible financial profit of their work.”36

What the institute lacked in laboratory facilities was made up for by the hospitality of Oppenheimer’s “intellectual hotel.” Temporary housing was located just across the street from the computer building, and von Neumann invited an unprecedented collection of individuals to take up residence while running their problems (the more intractable the better) on the new machine. An exceptional burst of creativity resulted from the conjunction of a completely new mathematical tool with these problems, many of them waiting for centuries, that a small group of exceptionally creative people had gathered to solve. It would have been impossible to predict in advance which problems would be worth pursuing. “I am sure that the projected device, or rather the species of devices of which it is to be the first representative, is so radically new that many of its uses will become clear only after it has been put into operation,” wrote von Neumann in October 1945 to Lewis Strauss, who convinced the Office of Naval Research and later the Atomic Energy Commission to lend support:

Furthermore, these uses which are not, or not easily, predictable now, are likely to be the most important ones. Indeed they are by definition those which we do not recognize at present because they are farthest removed from what is now feasible, and they will therefore constitute the most surprising and farthest-going extension of our present sphere.37

Mathematical logic, long the reserve of theoreticians, not engineers, suddenly unleashed a revolution on the world. Sixty years later, the five kilobytes of high-speed random-access memory that powered the IAS computer would cost an immeasurably small thousandth of one cent. “A long chain of improbable chance events led to our involvement,” recalls Julian Bigelow, Norbert Wiener’s wartime collaborator who joined von Neumann as chief architect and engineer.

People ordinarily of modest aspirations, we all worked so hard and selflessly because we believed—we knew—it was happening here and at a few other places right then, and we were lucky to be in on it. We were sure because von Neumann cleared the cobwebs from our minds as nobody else could have done. A tidal wave of computational power was about to break and inundate everything in science and much elsewhere, and things would never be the same.38

The results left no aspect of the existing universe unchanged. This had been von Neumann’s intention all along. “Johnny had by then [1946] a very definite idea of how and why he wanted this machine to function, with the emphasis on the why,” Klári von Neumann reported in 1961. “He wanted to build a fast, electronic, completely automatic ‘all purpose’ computing machine which could answer as many questions as there were people who could think of asking them.”39

Notes

1. Stewart to Abraham Flexner, February 8, 1939. This and all subsequent archival documents not otherwise attributed are from the Shelby White and Leon Levy Archives Center, Institute for Advanced Study, Princeton, New Jersey.
2. May 20, 2005.
3. P. A. M. Dirac to IAS trustees, date unknown.
4. Thorstein Veblen, The Higher Learning in America (New York: B. W. Huebsch, 1918), 45.
5. Abraham Flexner to Herbert Maass, December 15, 1937.
6. Oswald Veblen to Simon Flexner, October 24, 1923.
7. Oswald Veblen to Simon Flexner, February 23, 1924.
8. Simon Flexner to Oswald Veblen, March 11, 1924.
9. Herbert H. Maass, “Report on the Founding and Early History of the Institute,” unpublished manuscript, undated (ca. 1955).
10. Abraham Flexner, Universities: American, English, German (New York: Oxford University Press, 1930), 217.
11. Abraham Flexner, I Remember (New York: Simon & Schuster, 1940), 13.
12. Abraham Flexner, “The Usefulness of Useless Knowledge,” Harper’s Magazine, October 1939, 548.
13. Louis Bamberger to IAS trustees, June 4, 1930.
14. Flexner, “Usefulness of Useless Knowledge,” 551.
15. Huxley to Abraham Flexner, December 11, 1932.
16. Abraham Flexner to IAS trustees, September 26, 1931.
17. Louis Bamberger to IAS trustees, April 23, 1934.
18. Oswald Veblen to Abraham Flexner, June 19, 1931.
19. Beard to Abraham Flexner, June 28, 1931.
20. Frankfurter to Frank Aydelotte, December 16, 1933.
21. Abraham Flexner to IAS trustees, September 26, 1931.
22. Flexner, “Usefulness of Useless Knowledge,” 551.
23. Abraham Flexner to IAS trustees, September 26, 1931.
24. Abraham Flexner to Louis Bamberger, December 1, 1932.
25. Klára von Neumann, “Two New Worlds,” unpublished manuscript, ca. 1963 (courtesy of Marina von Neumann Whitman).
26. Oswald Veblen to Abraham Flexner, April 12, 1934.
27. Watson Davis, “Super-University for Super-Scholars,” Science News-Letter 23, no. 616 (1933): 54.
28. Flexner, I Remember, 375.
29. Ibid., 377–78.
30. Ibid., 366.
31. Frank Aydelotte, “Report of the Director,” May 19, 1941.
32. Harris Mayer, “People of the Hill: The Early Days,” Los Alamos Science 28 (2003): 9.
33. Wiener to von Neumann, March 24, 1945.
34. Goldstine, memo to Mr. Fleming, April 20, 1951.
35. James Pomerene, interview with Nancy Stern, September 26, 1980, Charles Babbage Institute of Oral History, University of Minnesota, Minneapolis, OH 31.
36. Abraham Flexner, “University Patents,” Science 77, no. 1996 (1933): 325.
37. Von Neumann to Strauss, October 24, 1945.
38. Julian Bigelow, “Computer Development at the Institute for Advanced Study,” in A History of Computing in the Twentieth Century, ed. Nicholas Metropolis, J. Howlett, and Gian-Carlo Rota (New York: Academic Press, 1980), 291.
39. Klára von Neumann, “The Computer,” unpublished manuscript, ca. 1963 (courtesy of Marina von Neumann Whitman).

Bibliography

Aspray, William. John von Neumann and the Origins of Modern Computing. Cambridge, MA: MIT Press, 1990.
Bigelow, Julian. “Computer Development at the Institute for Advanced Study.” In A History of Computing in the Twentieth Century, edited by Nicholas Metropolis, J. Howlett, and Gian-Carlo Rota, 291–310. New York: Academic Press, 1980.
Davis, Watson. “Super-University for Super-Scholars.” Science News-Letter 23, no. 616 (1933): 54–55.
Dyson, George. Turing’s Cathedral: The Origins of the Digital Universe. New York: Pantheon, 2012.
Flexner, Abraham. I Remember. New York: Simon & Schuster, 1940.
———. Universities: American, English, German. New York: Oxford University Press, 1930.
———. “University Patents.” Science 77, no. 1996 (1933): 325.
———. “The Usefulness of Useless Knowledge.” Harper’s Magazine, October 1939, 544–52.
Maass, Herbert H. “Report on the Founding and Early History of the Institute.” Unpublished manuscript, undated (ca. 1955).
Mayer, Harris. “People of the Hill: The Early Days.” Los Alamos Science 28 (2003): 4–29.
Veblen, Thorstein. The Higher Learning in America. New York: B. W. Huebsch, 1918.
von Neumann, Klára. “The Computer.” Unpublished manuscript, ca. 1963.
———. “Two New Worlds.” Unpublished manuscript, ca. 1963.
Whitman, Marina von Neumann. The Martian’s Daughter: A Memoir. Ann Arbor: University of Michigan Press, 2012.

Chapter Five

Education and Exceptional Creativity
The Decoding of DNA and the Decipherment of Linear B

Andrew Robinson

Formal education and educational institutions have long had an uneasy relationship with exceptional creativity and genius—perhaps most notably in the arts, but also in the sciences. Pioneering figures such as Isaac Newton, Thomas Young, Charles Darwin, Marie Curie, and Albert Einstein had university training, which they found necessary and sometimes stimulating. Yet all of their initial breakthroughs were made while they were working outside of a university, and required them to reject ideas then prevailing in the academy. There are plenty of other examples of this phenomenon, which continues to be important and intriguing, if hard to analyze.

Consider the astonishing life of the early twentieth-century Indian mathematician Srinivasa Ramanujan, whom current mathematicians regard as one of the great mathematicians of all time, in a class with Leonhard Euler and Karl Jacobi, according to Eric Temple Bell’s well-respected historical study of mathematicians, Men of Mathematics. In barest outline, Ramanujan, born in 1887 to poor parents, was an impoverished, devout, Brahmin clerk working at the Madras Port Trust, self-taught in mathematics and without a university degree, who claimed that his mathematics was inspired by a Hindu goddess, Namagiri. Ramanujan was inclined to say, “An equation for me has no meaning, unless it represents a thought of God.”1 Out of desperation at the dearth of appreciation of his theorems by university-trained mathematicians in India, in 1913 Ramanujan mailed some of them, without proofs, to a leading Cambridge University mathematician (and confirmed atheist), G. H. Hardy. Despite their unfamiliar and highly improbable source, the formulae were so transcendently original that Hardy dragged a reluctant Ramanujan from obscurity to Trinity College, Cambridge, where he collaborated extensively with him, published many joint papers in journals, and demonstrated that he was a mathematical genius. In 1918 Ramanujan was elected the first Indian fellow of Trinity College and of the modern Royal Society. But having fallen mysteriously ill and attempted suicide on the London Underground, he then returned to India to recuperate, still producing major new theorems on his sickbed, and died tragically in 1920 at the age of just thirty-two. After his death, a dazzled Hardy wrote of Ramanujan,

The limitations of his knowledge were as startling as its profundity. . . . His ideas as to what constituted a mathematical proof were of the most shadowy description. All his results, new and old, right or wrong, had been arrived at by a process of mingled argument, intuition, and induction, of which he was entirely unable to give any coherent account.2

Ramanujan’s biographer Robert Kanigel, in his masterly book The Man Who Knew Infinity, writes that “Ramanujan’s life was like the Bible, or Shakespeare—a rich find of data, lush with ambiguity, that holds up a mirror to ourselves or our age.”3 Kanigel gives four fascinating examples. First, the Indian school system flunked Ramanujan in his teens—but a few individuals in India sensed his brilliance and rescued him from near-starvation by getting him a job as a clerk. Second, Hardy recognized Ramanujan’s genius from his 1913 letter—but drove him so hard in England that he may have hastened his Indian protégé’s death. Third, had Ramanujan received Cambridge-style mathematical training in his early life he might have reached still greater heights—but possibly, instead, such training might have stifled his originality. Lastly, Hardy, as an atheist, was convinced that religion had nothing to do with Ramanujan’s intellectual power—but it is at least plausible that Hindu India’s long-standing mystical attraction to the concept of the infinite was a vital source of Ramanujan’s creativity. “Was Ramanujan’s life a tragedy of unfulfilled promise? Or did his five years in Cambridge redeem it?” asks Kanigel. “In each case, the evidence [leaves] ample room to see it either way.”4

Ramanujan’s experience of formal education can hardly be called typical. But neither should it be dismissed as unique and irrelevant. Elements of it are to be found in the education of all exceptionally creative figures. While some geniuses may have enjoyed and benefited from their school days, the majority did not. (A handful, such as Wolfgang Amadeus Mozart and the philosopher John Stuart Mill, never attended a school.) Many never went to university or failed to distinguish themselves there. Only a small minority became highly educated by taking a doctoral degree. While some important creative breakthroughs have emerged from colleges and universities, particularly in the sciences, on the whole they have not. Mark Twain’s quip remains pertinent: “I have never let my schooling interfere with my education.” So does that of the great photographer Henri Cartier-Bresson, who had failed his school leaving exam, when he refused an honorary doctorate decades later: “What do you think I’m a professor of? The little finger?”5 More prosaically, the nineteenth-century polymath Thomas Young—physicist, physician, and Egyptologist, among several other things—stated, after studying at three famous universities (Edinburgh, Göttingen, and Cambridge), “Masters and mistresses are very necessary to compensate for want of inclination and exertion: but whoever would arrive at excellence must be self-taught.”6 Newton, Darwin, Einstein, and many others emphatically agreed.

In 2000–2002, the BBC broadcaster and arts administrator John Tusa interviewed on radio about a dozen figures well known in the arts concerning their creative process, and later published the conversations in full in his collection On Creativity. They were architect Nicholas Grimshaw; artists Frank Auerbach, Anthony Caro, Howard Hodgkin, and Paula Rego; photographer Eve Arnold and filmmaker Milos Forman; composers Harrison Birtwistle, Elliott Carter, and György Ligeti; writers Tony Harrison and Muriel Spark; and art critic and curator David Sylvester. Their formal education varied greatly, from ordinary schooling in the case of Arnold and Sylvester to Carter’s doctoral training in music and subsequent academic appointments. There was nothing in what they said of their careers to indicate that a basic education, let alone a university degree, is a requirement in order to be a creative person in the arts, Tusa concluded.7

A much larger sample of the exceptionally creative—nearly one hundred individuals—were interviewed by University of Chicago psychologist Mihaly Csikszentmihalyi in the 1990s. Unlike Tusa’s subjects, Csikszentmihalyi’s interviewees included, as well as those eminent in the arts, many scientists, mostly working in universities, some of whom were Nobel laureates. Schooldays were rarely mentioned by any of the interviewees as a source of inspiration. In some cases, they remembered extracurricular school activities—for example, the literary prizes won by writer Robertson Davies or the mathematical prize won in a competition by physicist John Bardeen (later the world’s only double Nobel laureate in physics). Some inspiring individual teachers were also recalled, though chiefly by the scientists. But overall, Csikszentmihalyi was surprised by how many of the interviewees had no memory of a special relationship with a teacher at school. “It is quite strange how little effect school—even high school—seems to have had on the lives of creative people. Often one senses that, if anything, school threatened to extinguish the interest and curiosity that the child had discovered outside its walls,” writes Csikszentmihalyi in his study Creativity: Flow and the Psychology of Discovery and Invention. “How much did schools contribute to the accomplishments of Einstein, or Picasso, or T. S. Eliot? The record is rather grim, especially considering how much effort, how many resources, and how many hopes go into our formal educational system.”8

Leaving school and moving on to higher education and professional training, one finds the pattern of experiences less clear-cut. Some exceptionally creative achievers receive no further formal education after leaving school, but this has become relatively unusual in recent decades, with the worldwide expansion in higher education, and almost inconceivable for scientists. Among Tusa’s sample of twentieth-century creators (which excludes scientists), three of them—Arnold, Spark, and Sylvester—received no institutional training in their field, and indeed had no further formal education. Only three of them—Carter, Caro, and Harrison—took university degrees; Carter alone went on to do a doctorate. Auerbach, Grimshaw, Hodgkin, and Rego went to art schools. Birtwistle and Ligeti trained at academies of music. Forman went to film school.

The saga of Einstein’s physics doctorate is revealing about institutional training and creativity. In the summer of 1900, Einstein graduated from the Swiss Polytechnic, but was not offered an assistant’s post in the physics department, because of his spotty attendance record at lectures and his critical attitude to the professors; this left him in an uncomfortable situation of financial and professional uncertainty. During 1901, unable to interest professors at other institutions in employing an unknown, he decided that he needed a doctorate to make an academic career, and submitted a thesis to the University of Zurich. To his dismay, it was rejected. Then, in the summer of 1902, he at long last landed his first full-time job, at the Swiss Patent Office in Bern. The idea of a doctorate was put aside. In early 1903 Einstein told a close friend that he had abandoned the plan, “as [it] doesn’t help me much and the whole comedy has begun to bore me.”9 But in the summer of 1905, his annus mirabilis, after completing his theory of special relativity, he revived the doctorate plan for the same reason as before: he needed a doctoral degree to get out of the patent office and into a university. Second time around, he submitted his paper on special relativity to the University of Zurich—and it, too, was rejected! At least this is what happened according to his sister, who was close to her brother: she wrote that relativity “seemed a little uncanny to the decision-making professors.”10 There is no proof, although both Einstein’s choice of this paper and the professors’ skeptical reaction to it seem plausible, since special relativity was clearly important enough for a thesis but had not yet been vetted and published by the scientific establishment (and would remain intensely controversial after its publication in 1905, rejected by the Nobel physics committee of the Swedish Academy for many years). For whatever reason, in the end Einstein selected some less challenging, though still significant, work he had completed in April 1905 just before special relativity—a paper on how to determine the true size of molecules in liquids, respectably based on experimental data rather than relying on purely theoretical arguments like relativity—and resubmitted the thesis. According to him, perhaps speaking half in jest, the Zurich professors informed him that the manuscript was too short, so Einstein added one sentence. Within days, this more orthodox paper was accepted, and by the end of July 1905 he could finally call himself “Herr Doktor Einstein.” Only later was a small but important mistake discovered in the thesis, which Einstein duly corrected in print in 1906, and further refined in 1910, as better experimental data became available.

The point is, of course, that academia has an inherent tendency to ignore or reject highly original work that does not fit the existing paradigm. Einstein was self-evidently just as original and creative in 1905 without a PhD as with a PhD. To get one, he seems to have been encouraged to show less, rather than more, originality. Might it be that too much training and education can be a handicap for the truly creative?

In 1984, psychologist Dean Keith Simonton studied the educational level of more than three hundred exceptionally creative individuals born from 1450 to 1850—before the introduction of the recognizably modern university system (post-Darwin, but pre-Einstein, so to speak). Simonton discovered that the top creators—including Ludwig van Beethoven, Galileo Galilei, Leonardo da Vinci, Mozart, and Rembrandt van Rijn—had attained an educational level equivalent to approximately halfway through a modern undergraduate program. Those with more (or less) education than this had a lower level of creative accomplishment, generally speaking.11 Not too much weight should be put on Simonton’s discovery, given the difficulty of estimating the educational level of some highly creative historical individuals, and of comparing levels of education in different societies at different periods. However, the finding is supported by the regularity with which highly creative individuals lose interest in academic work during their undergraduate degree course and choose to focus instead on what fascinates them. Some even drop out of university to pursue their hunches, such as the computer scientist Bill Gates, who left Harvard University in the 1970s in order to establish Micro-Soft (the original name of Microsoft), and the film director Satyajit Ray, who dropped out of the art school at Visva-Bharati University in India in the 1940s to become a commercial artist in Calcutta.

Simonton’s finding may also provide a clue as to why, in higher education, the postwar increase in the number of PhDs has not led to more exceptionally creative research—if Simonton is correct that the optimal education for exceptional creativity does not require a PhD. In the sciences, the twentieth-century expansion of higher education at the doctoral level produced a proliferation of new research specialisms and new journals catering to these specialisms. “Since 1945, the number of scientific papers and journals in highly industrialized societies—particularly in the United States—has risen almost exponentially, while the proportion of the workforce in research and development and the percentage of gross national product devoted to it have grown more modestly,” the sociologist of science J. Rogers Hollingsworth wrote in Nature in 2008, after spending several decades studying innovation in different societies. “Yet the rate at which truly creative work emerges has remained relatively constant. In terms of the scale of research efforts to make major scientific breakthroughs, there are diminishing returns.”12

A more likely explanation of this fact, however, is that in contemporary society, exceptionally creative scientists and artists differ in the periods of training they require, because of the changed nature of the scientific enterprise, as compared to that of the late nineteenth century and before. Exceptionally creative artists do not require doctoral training now any more than they did in earlier times—but this is not true of their equivalents in science, who must master a greater breadth of knowledge and techniques before they can reach the frontier of their discipline and make a new discovery. Scientists also need to be much better students than artists, in terms of their performance in school and university examinations. Simonton notes that “the contrast in academic performance between scientists and artists appears to reflect the comparative degree of constraint that must be imposed on the creative process in the sciences versus the arts.”13 Whether this fact has the tendency to squeeze potential Darwins and Einsteins out of the system in favor of the merely productive academic scientist is an endlessly discussed subject, to which no one has yet given a satisfactory answer. What is generally accepted, though, is that the huge growth in size and competitiveness of higher education in the second half of the twentieth century and after did not increase the number of exceptionally creative scientists.

Two famous breakthroughs from this post-1945 period—one in the sciences (biochemistry), the other in the humanities (archaeology and linguistics)—may provide suggestive clues toward solving the puzzle of the relationship between individuals, institutions, and exceptional creativity. The first is the decoding of the structure of the macromolecule DNA (deoxyribonucleic acid), the second the decipherment of the ancient script Linear B. By an odd coincidence, both breakthroughs occurred at almost the same time, in the early 1950s.

106

education and excep tional creativit y

DNA vs. Linear B The late spring of 1953 was an extraordinary time for human endeavor. In Asia, on May 29, two mountain climbers, the New Zealander Edmund Hillary and the Nepalese Tenzing Norgay, made the first successful ascent of Mount Everest, the world’s highest peak. In Europe, two scientists working at the Cavendish Laboratory of Cambridge University, the Englishman Francis Crick and the American James D. Watson, announced in the journal Nature their discovery of the double-­helix structure of DNA, the basic molecule of life and heredity. This was at the end of April 1953. In a second paper, published in May, Crick and Watson speculatively outlined the far-­ reaching genetic implications of their discovery. At the same moment, in London, the decipherment of Europe’s earliest readable writing, dating from the middle of the second millennium BC—­older than Homer—­was announced. For many years, the architect Michael Ventris had been working alone at home in his spare time on the problem of reading “Minoan Linear Script B.” Linear B was written on clay tablets from the Palace of Minos at ancient Knossos in Crete. It was first discovered at Knossos in 1900 by the archaeologist Sir Arthur Evans. Evans had tried to decipher it for decades, but he had failed. Ventris first went public with a tentative decipherment in July 1952. Some classical specialists, notably John Chadwick of Cambridge University, hailed Ventris’s achievement, but others were extremely skeptical of his proposed readings. What was needed to prove the decipherment were some fresh clay tablets, unknown to Ventris, which could be read convincingly by others using Ventris’s proposed system. In May 1953 Ventris received a letter from the American archaeologist Carl Blegen, who was digging in mainland Greece. Blegen had used Ventris’s unproven decipherment to read a recently excavated clay tablet, with results as pivotal as the Rosetta Stone had been for

107

andrew rob inson

the decipherment of the Egyptian hieroglyphs in the 1820s. At long last, half a century after Evans discovered them, the tantalizing signs in the clay began to yield up their meaning: Linear B could be read in an archaic dialect of ancient Greek. Britain’s leading newspaper, The Times, ignored the discovery of the double helix (unthinkable today!) but quickly ran a leader article, in June 1953, about the climbing of Mount Everest titled “Men and Mountains,” which hailed “a story that will live as long as courage and comradeship are honoured.” Immediately below it was a second leader about Linear B, “On the Threshold?” which spoke of the potentially imminent revelation of an ancient language and culture predating the Trojan War, “as distant from the Greek of Homer as is the English of Chaucer from that which we speak today.”14 Ventris’s decipherment was quickly dubbed “the Everest of Greek archaeology.”15 The story of how DNA was decoded is a celebrated one, thanks partly to Watson’s classic account, The Double Helix, first published in 1968, which formed the basis for a BBC television drama, Life Story (1987), followed by Crick’s autobiography What Mad Pursuit (1989), and exhaustive studies by Horace Freeland Judson (1979; exp. ed., 1996) and Robert Olby (2009).16 How Linear B was deciphered is also well known, if not so quite so familiar, thanks to The Decipherment of Linear B (1958), the account written by Ventris’s key collaborator, Chadwick, supplemented by my own short biography of Ventris, The Man Who Deciphered Linear B (2002), which formed the basis for a BBC television drama-­documentary, A Very English Genius (2002). The curious coincidence in time of these two stories of decoding has been passingly noted by several scholars, such as Maurice Pope in his account of the Linear B story in The Story of Decipherment. However, no one has attempted a comparison, which would require considerable knowledge of chemistry, physics, and genetics, in addition to archaeology, epigraphy, and linguistics. In this brief first attempt, we shall merely pick out some significant resemblances, and also some differences, between the two stories, relating mainly to the individual 108

education and excep tional creativit y

contributions, motivations, and working environments of Crick and Watson, Ventris and Chadwick, and some other significant contributors, who were distinguished scientists and scholars at leading research institutions in the United Kingdom and the United States. Watson, at the beginning of The Double Helix, writes that the decoding of DNA, for all its great complexity, was “chiefly . . . a matter of five people.”17 The dramatis personae were Maurice Wilkins, Rosalind Franklin, Linus Pauling, Francis Crick, and Watson himself. Wilkins (1916–2004), who started out as a physicist, was a biochemist and molecular biologist at King’s College, London, and was experienced with X-­ray diffraction, as was his colleague Rosalind Franklin (1920–1958)—­although the two of them notoriously worked at King’s almost independently of each other. The older scientist Linus Pauling (1901–1994) was a chemist at the California Institute of Technology (Caltech), who by 1953 was regarded as the world’s leading expert on chemical bonding in complex biological macromolecules. Crick (1916–2004) was originally trained as a physicist (like Wilkins) at the University of London, but in 1949 moved to Cambridge and began working in biophysics at the Cavendish Laboratory in order to gain a belated PhD. Watson (born 1928), already with a PhD in biology from Indiana University, but small knowledge of either chemistry or physics, arrived at the Cavendish in 1951, on a postdoctoral fellowship intended to expose him to European biochemistry. The Linear B decipherment story happens also to involve five key individuals, three of them British and two American, as with the DNA story. They were Emmett Bennett Jr., Alice Kober, Sir John Myres, Michael Ventris, and John Chadwick. Bennett (1918–2011) was an epigraphist, with wartime experience of cryptography, who had written a doctorate on Linear B under the archaeologist Carl Blegen at the University of Cincinnati in the late 1940s; by 1953 Bennett had moved to Yale University. Kober (1906– 1950) was a classicist with a PhD in Greek literature from Columbia University, who had developed a consuming interest in Linear B in 109

andrew rob inson

1935 after reading volume 4 of Arthur Evans’s classic work The Palace of Minos; at the same time she joined the staff of Brooklyn College in New York as an assistant professor of classics. The aging archaeologist Sir John Myres (1869–1954) was professor of ancient history at Oxford University until 1939 and widely considered a leading authority on the ancient Greeks; in addition, since the death of his longtime friend Evans in 1941, Myres had been the custodian and editor of the Linear B tablets, which he eventually published (as Scripta Minoa) in 1952, a few months before Ventris’s decipherment. Ventris (1922–1956) never went to university, but instead trained as an architect at the Architectural Association School in London in the 1940s before beginning to practice architecture professionally; his passion for deciphering Linear B began in 1936 when he was still a schoolboy. Chadwick (1920–1998) had a first degree in classics from Cambridge but no PhD; after wartime work as a cryptographer (like Bennett) and work in Oxford on the staff of the Oxford Latin Dictionary, Chadwick became a lecturer in classics at Cambridge in 1952, the year he met Ventris, and remained in Cambridge for the rest of his life. These five individuals had roles in the Linear B decipherment story that were in significant ways similar to the roles of the quintuplet in the DNA double-­helix story. Bennett may be compared to Wilkins, Kober to Franklin, Myres to Pauling, Ventris to Crick, Chadwick to Watson. This is not to suggest that the personalities in each pairing were necessarily similar. On the whole, they were not. Crick, for example, was a gregarious extrovert whose research required frequent conversation with a regular collaborator, while Ventris was by nature a loner, to the extent of having a large circle of acquaintances but no intimate friends who shared his fascination with Linear B. “From my first day in the [Cavendish] lab I knew I would not leave Cambridge for a long time. Departing would be idiocy, for I had immediately discovered the fun of talking to Francis Crick,” writes Watson.18 Ventris’s collaborator Chadwick, by contrast, communicated with Ventris mainly through

110

education and excep tional creativit y

immensely detailed letters (occasionally written in Linear B!), and was personally distant from him. Rather than their personalities, it is the intellectual contributions of the five pairs that show surprising parallels. Crick, Watson, and Wilkins won the Nobel Prize in physiology or medicine for their work in 1962. Had Nobel Prizes in archaeology existed, one would surely have been awarded to Ventris, Chadwick, and Bennett in the 1950s. Let us try to compare and summarize their respective key contributions, beginning with Wilkins and Bennett, who laid the groundwork for the two breakthroughs. Wilkins provided the X-­ray diffraction images of DNA that permitted Crick and Watson to build and modify their theoretical models of the molecule. Bennett provided a painstaking analysis of many of the Linear B tablets, which established the signary of the writing system—­in other words, Bennett classified the mysterious signs into logical categories. This work formed the basis for Ventris’s manipulation of the signs. Both Wilkins and Bennett had structural insights, too—­for example, the pairing of bases in DNA, and the numerical system in Linear B—­some of which they discussed with their “rivals”; but neither man was willing to risk being as theoretically daring as his rivals. As Wilkins admitted in his autobiography, “I was not as audacious in my thinking as some of the other scientists on the trail of the DNA structure.”19 Bennett, confronted with Ventris’s tentative decipherment in 1953, was impressed but nonetheless responded in public with a “fine set of cautious, non-­committal phrases,” as he privately confessed to Ventris.20 Franklin also provided X-­ray diffraction images to Crick and Watson, if inadvertently. (Some of her key data reached Crick and Watson in Cambridge in the form of a privately circulated Medical Research Council committee report on the King’s College laboratory, via their Cavendish colleague, the molecular biologist Max Perutz.) But unlike Wilkins, Franklin was a scathing critic of the Cambridge duo’s attempt to model the structure of DNA theoretically, without

111

andrew rob inson

doing any experimental work. Moreover, Franklin initially convinced herself from her X-­ray images that the structure could not be helical, although later she changed her mind. “She had been well on the way to discovering the structure herself, though anxious, perhaps overanxious, to be quite certain of her results before publishing them or experimenting with models.”21 Kober, similarly, had little sympathy for Ventris’s attempt to read Linear B by introducing hypotheses about its underlying language. She curtly informed Ventris in 1950, “In my opinion [your approach] represents a step in the wrong direction and is a complete waste of time.”22 Like Franklin, Kober believed the solution would emerge from a cautious search for patterns in the data, and in the 1940s she published analyses in the American Journal of Archaeology that supplied a method of attack to Ventris. One of her observations published in 1946, of a striking pattern in certain Knossos inscriptions, dubbed “triplets” by Ventris, became the key to his decipherment of Linear B as Greek—­somewhat like Franklin’s famous unpublished X-­shaped diffraction image of 1952, which convinced Watson and Crick that DNA was helical. In Kober’s last important paper before her death, she concluded, “When we have the facts, certain conclusions will be almost inevitable. Until we have them, no conclusions are possible”23—­a remark Ventris rightly regarded as too pessimistic. Pauling, brilliant chemist that he undoubtedly was, proposed a totally erroneous chemical structure of DNA in early 1953 without studying any high-­quality X-­ray diffraction images, using only his theoretical models of the molecule based on his unrivaled knowledge of chemical bonding. His three-­chain helix with the sugar-­phosphate backbones in the center (instead of on the outside) and the phosphate groups electrically neutral (instead of ionized), was chemically impossible, since DNA was known to be an acid containing hydrogen ions. Myres, too, misinterpreted his data, largely as a result of remaining loyal to some incorrect notions inherited from Linear B’s discoverer Evans. Instead of accepting that many Linear B signs were phonetic symbols (standing for vowels and syllables), Myres read them as pictograms. For example, 112

education and excep tional creativit y

the frequent Linear B sign that resembles a Minoan double-axe, which Ventris read as the vowel a, Myres read (following Evans) as a pictogram standing for the word "double-axe." Myres was also unwilling to acknowledge the possibility that the language behind Linear B might be Greek (rather than "Minoan"): in 1950 Myres told Ventris, "[The language] is not clearly related to any known language—not enough to aid decipherment."24 Thus both Pauling and Myres, despite having a head start on the problem by virtue of their age and experience (in biochemistry and ancient Greek, respectively), made little direct contribution to its solution. (It should be noted, though, that Myres's experience lay far more in the history, archaeology, anthropology, and geography of ancient Greece than in epigraphy.)

Crick and Ventris, by general agreement, had the most agile minds and the most wide-ranging interests among these two groups. Neither responded with enthusiasm to formal education and conventional career paths: indeed Crick managed only an average second-class degree at University College London and failed to complete his PhD in physics, while Ventris went straight to architecture school rather than studying classics at Oxford, as he had at first considered doing. Both men had the ability to concentrate deeply for long periods on a problem that interested them. And both had markedly visual imaginations, which enabled Crick to deduce chemical structures from X-ray images by intuition as much as calculation, and Ventris to sift and compare numerous tablet inscriptions mentally. But whereas Crick felt satisfied to devote most of his career (though not its later stages) to one field, molecular biology, Ventris was anxious to avoid devoting his career to Linear B studies after he had laid the basis for the decipherment in 1952–1953. In 1956, three years after his decipherment was accepted (and just months before his death in a car accident), Ventris made plans to return full-time to architecture and abandon Linear B entirely to a brand-new group of academic specialists.

Watson and Chadwick, though very dissimilar as personalities, were by temperament inclined to stick with the two fields—molecular biology and Linear B studies, respectively—they had helped to found. Both acted for the most part as sounding boards for their collaborators, although Watson unquestionably had a more original and adventurous mind than Chadwick. On the other hand, Chadwick had relevant specialist knowledge (of early Greek) that Watson could not claim in chemistry, despite his helpful background in biology. Neither regarded himself as being at quite the same intellectual level as his collaborator; in fact each felt profound respect for the other half of the partnership. As Watson wrote in The Double Helix, "Already [Crick] is much talked about, usually with reverence, and some day he may be considered in the category of Rutherford or Bohr."25 Chadwick, for his part, once confessed to Ventris that he was the "pedestrian" Dr. Watson to the master decipherer's Sherlock Holmes.26 In The Decipherment of Linear B, Chadwick honestly stated:

Ventris was able to discover among the bewildering variety of the mysterious signs, patterns and regularities which betrayed the underlying structure. It is this quality, the power of seeing order in apparent confusion, that has marked the work of all great men.27

As for the working environments involved in these two breakthroughs, the parallel stories shed at least some light on how leading research institutions engender—or do not engender—creative atmospheres for individuals. Caltech (through Pauling), Cambridge University (through Chadwick, Crick, and Watson), King's College London (through Franklin and Wilkins), and Oxford University (through Myres) were each important. (Yale University was perhaps less significant in the case of Bennett, as was Brooklyn College for Kober.) The atmosphere at both Caltech and Oxford seems to have induced a measure of complacency in Pauling and Myres, where each was regarded in this period as a grand authority in his field.
As Pauling later admitted to his wife, "I guess that I always thought that the DNA structure was mine to solve, and therefore I didn't pursue it aggressively enough."28 Pauling's biographer Thomas Hager accuses Pauling of "hurry and hubris," as a result of his earlier extraordinary successes in biochemistry.29 At Oxford, where the Linear B tablets came to rest in the university's Ashmolean Museum, an institution where Arthur Evans had formerly been the keeper, the long shadow of Sir Arthur—going back for Myres to the 1890s, when he and Evans had toured Crete looking for ancient inscriptions—probably encouraged Myres to downgrade Kober's sound analysis. She persistently attempted to assist Myres from 1946, but found herself stymied by the prestige that Myres accorded to Evans's wrong ideas.

At the Cavendish Laboratory in Cambridge, in contrast, enlightened leadership by the Nobel laureate Sir Lawrence Bragg, with the support of John Kendrew and Max Perutz, enabled the unorthodox Crick and Watson partnership to flourish, despite the ups and downs dramatically chronicled in The Double Helix, which included Bragg's near-dismissal of the ebulliently critical Crick. At King's College London, however, relationships between Wilkins and Franklin were unproductive, though not as a result of complacency so much as of a clash of personalities exacerbated by internal politics. The laboratory director John Randall (a physicist distinguished for his wartime work on radar) did not nurture a working relationship between Wilkins and Franklin, who quickly became distrustful of each other. "Randall sowed fatal confusion between Wilkins and Franklin, which ensured that they never collaborated as Watson and Crick did," notes Matt Ridley in his biography of Crick.30 "The clash, because of the importance of the work they were involved in, has become appallingly famous," writes Jenifer Glynn in her memoir of her sister Rosalind.31 Nonetheless, the London group's experimental data made possible the theoretical breakthrough in Cambridge. As Bragg diplomatically remarked in his foreword to Watson's The Double Helix,
It is a source of deep satisfaction to all intimately concerned that, in the award of the Nobel prize in 1962, due recognition was given to the long, patient investigation by Wilkins at King's College (London) as well as to the brilliant and rapid final solution by Crick and Watson at Cambridge.32

With the Linear B story, one cannot but be struck by the fact that Ventris was an independent agent, untrammeled by institutions. From 1949 to 1953, the key period in his decipherment, he was free of any institutional attachments relating to Linear B; he and his wife had a private income—substantial enough for Ventris to take periods of time off from paid employment as an architect. Yet Ventris was extraordinarily open to collaboration and consultation with other scholars—rather as the independent scientist Darwin had constantly communicated with fellow scientists and experts in the previous century. As an architecture student in the 1946–1948 period, Ventris took a special interest in "group working," which was then something of a buzz phrase at the Architectural Association School. "It is the privilege of individual genius to follow no system beyond a creative intuition; but in group working some minimum of method is essential," Ventris wrote at this time. He even developed three "golden rules" for group working.33 And he undoubtedly carried the group-working ethos in architecture into his study of Linear B. At the end of 1949, purely on his own initiative, he sent out a questionnaire to every scholar in the world whom he knew to be interested in the ancient script, translated and collated their responses, published them in what soon became known as the Mid-Century Report,34 and then circulated this typed report gratis to anyone interested in the Linear B problem. Thereafter, from 1950 to 1952, Ventris remained in close touch with active researchers, notably Bennett in the United States, and eagerly embraced collaboration with Chadwick after making his tentative announcement of the decipherment in July 1952. A wide range of scholars apart from Ventris and Chadwick willingly contributed to the writing of Documents in Mycenaean Greek, the Ventris/Chadwick "bible" of Linear B studies, published by Cambridge University Press in 1956. (Cambridge University itself, ironically—given its generous support for Crick and Watson—took much longer to embrace the study of Linear B, which first became established at University College London in the mid-1950s.)

Thus, the DNA and Linear B stories do not point to any straightforward link between institutional support and creativity. The vigorous give-and-take at the Cavendish clearly encouraged creative solutions to the decoding of DNA. As Wilkins admitted in his book The Third Man of the Double Helix,

Francis and Jim . . . argued very frankly, sometimes even taking their friendship and collaboration to its limits. That was a good way to be truly creative. In contrast, in our lab we suffered from a sad lack of openness, and would walk away from confrontation leaving matters unresolved, rather than facing up to our differences; but Francis and Jim's open discussions were uninhibited dialogue, involving very close attention to what was said.35

The lack of this give-and-take at King's College London discouraged creative solutions but, on the other hand, permitted the accumulation of crucial data. Linear B was deciphered essentially without any institutional support. In both stories, Caltech and Oxford, despite having major resources and expertise, failed to grasp the opportunities. Perhaps the only overall conclusion one can draw is that the most creative figures in these two intellectual triumphs, Crick and Ventris, were the least institutionalized of the ten individuals under consideration.

In addition, both men conform to the general attitude of geniuses to their schooldays mentioned at the start of this chapter. Crick and Ventris were above average at their boarding schools, but not excellent academically; Crick did better than Ventris, but failed to win the expected open scholarship to either an Oxford or a Cambridge college, while Ventris left school without winning any prizes. Crick admired some of his science teaching but quickly became bored with it, though he did have a fond memory of a teacher at his first school. Ventris derived little inspiration from his school's teaching, though he, too, had a fond memory of a teacher who taught him classics and accidentally introduced him to Linear B on a school expedition to a London exhibition on the Minoan world in 1936. Neither Crick nor Ventris became a prefect. Nor were they much drawn to group activities, such as team sports. On the whole, they were somewhat detached from their schools and liked to be autodidacts. Crick preferred reading the works of great novelists and poets to physics textbooks. Ventris was already preoccupied with the puzzle of the Minoan decipherment. Like his great predecessor Jean-François Champollion, who deciphered the Egyptian hieroglyphs in the 1820s, Ventris even worked secretly at night at his boarding school—under the bedclothes by the light of a flashlight after official lights-out, as one of his fellow boarders amusingly recalled.36

Can formal education ever instill this kind of dedication leading to exceptional creativity? Not on the evidence of past geniuses. The psychologist H. J. Eysenck, upon retiring from his academic institution in London, offered the following parting shot at the academic system in his study Genius: The Natural History of Creativity:

The best service we can do to creativity is to let it bloom unhindered, to remove all impediments, and cherish it whenever and wherever we encounter it. We probably cannot train it, but we can prevent it from being suffocated by rules, regulations, and envious mediocrity.37

Unfortunately, signally few educational institutions or national governments, for all their honorable efforts and claims to foster excellence and innovation, take this lesson to heart and manage to put it into practice in schools and universities.


Acknowledgments

I would like to thank Thomas G. Palaima, a long-standing expert on the Minoan scripts, for his comments on the first draft of this paper.

Notes

1. S. R. Ranganathan, Ramanujan: The Man and the Mathematician (Bombay: Asia Publishing House, 1967), 88.
2. G. H. Hardy, Collected Papers of G. H. Hardy (Oxford: Clarendon Press, 1979), 714.
3. Robert Kanigel, The Man Who Knew Infinity: A Life of the Genius Ramanujan (London: Scribner, 1991), 357.
4. Ibid.
5. Quoted in Pierre Assouline, Henri Cartier-Bresson: A Biography (London: Thames & Hudson, 2005), 253.
6. Quoted in Andrew Robinson, The Last Man Who Knew Everything: Thomas Young (Oxford: Oneworld, 2006), 15.
7. John Tusa, On Creativity: Interviews Exploring the Process (London: Methuen, 2003). See also Andrew Robinson, Sudden Genius? The Gradual Path to Creative Breakthroughs (Oxford: Oxford University Press, 2010), especially chap. 17.
8. Mihaly Csikszentmihalyi, Creativity: Flow and the Psychology of Discovery and Invention (New York: HarperCollins, 1996), 173.
9. Quoted in Albrecht Fölsing, Albert Einstein: A Biography (London: Viking, 1997), 123.
10. Ibid.
11. Dean Keith Simonton, Genius, Creativity, and Leadership: Historiometric Inquiries (Cambridge, MA: Harvard University Press, 1984).
12. J. Rogers Hollingsworth, Karl H. Müller, and Ellen Jane Hollingsworth, "The End of the Science Superpowers," Nature 454 (2008): 412.
13. Dean Keith Simonton, Creativity in Science: Chance, Logic, Genius, and Zeitgeist (Cambridge: Cambridge University Press, 2004), 127–28.
14. The Times, June 25, 1953.
15. Quoted in Andrew Robinson, The Man Who Deciphered Linear B: The Story of Michael Ventris (London: Thames & Hudson, 2002), 122.
16. Horace Freeland Judson, The Eighth Day of Creation: Makers of the Revolution in Biology, exp. ed. (Cold Spring Harbor, NY: Cold Spring Harbor Laboratory Press, 1996); Robert Olby, Francis Crick: Hunter of Life's Secrets (Cold Spring Harbor, NY: Cold Spring Harbor Laboratory Press, 2009).
17. James D. Watson, The Double Helix: A Personal Account of the Discovery of the Structure of DNA (London: Weidenfeld and Nicolson, 1997), 18.
18. Ibid., 46.
19. Maurice Wilkins, The Third Man of the Double Helix: The Autobiography of Maurice Wilkins (Oxford: Oxford University Press, 2003), 119.
20. Quoted in Robinson, The Man Who Deciphered Linear B, 116.
21. Jenifer Glynn, My Sister Rosalind Franklin (Oxford: Oxford University Press, 2012), 127.
22. Kober to Ventris, February 20, 1950, in Michael Ventris, Work Notes on Minoan Language Research and Other Unedited Papers, ed. Anna Sacconi (Rome: Edizioni dell'Ateneo, 1988), 67.
23. Alice E. Kober, "The Minoan Scripts: Fact and Theory," American Journal of Archaeology 52 (1948): 103. Kober's life and work are the subject of a notable forthcoming biography, The Riddle of the Labyrinth by Margalit Fox.
24. Myres to Ventris, December 25, 1949, in Ventris, Work Notes on Minoan Language Research, 71.
25. Watson, Double Helix, 19.
26. Quoted in Robinson, The Man Who Deciphered Linear B, 14.
27. Chadwick, The Decipherment of Linear B, 4.
28. Quoted in a letter from Jim Lake, "Why Pauling Didn't Solve the Structure of DNA," Nature 409 (2001): 558.
29. Thomas Hager, Force of Nature: The Life of Linus Pauling (New York: Simon and Schuster, 1995), 430.
30. Matt Ridley, Francis Crick: Discoverer of the Genetic Code (London: Harper Press, 2006), 75.
31. Glynn, My Sister Rosalind Franklin, 120.
32. Foreword to Watson, Double Helix, 9.
33. Quoted in Robinson, The Man Who Deciphered Linear B, 50–51.
34. "The Languages of the Minoan and Mycenaean Civilizations, New Year 1950," in Ventris, Work Notes on Minoan Language Research, 31–132.
35. Wilkins, The Third Man of the Double Helix, 225–26.
36. Robinson, The Man Who Deciphered Linear B, 20; and Andrew Robinson, Cracking the Egyptian Code: The Revolutionary Life of Jean-François Champollion (London: Thames & Hudson, 2012), 45. Crick's schooldays are covered in Olby, Francis Crick, 31–38.
37. H. J. Eysenck, Genius: The Natural History of Creativity (Cambridge: Cambridge University Press, 1995), 288.

Bibliography

Assouline, Pierre. Henri Cartier-Bresson: A Biography. London: Thames & Hudson, 2005.
Chadwick, John. The Decipherment of Linear B. Cambridge: Cambridge University Press, 1958.
Crick, Francis. What Mad Pursuit: A Personal View of Scientific Discovery. London: Weidenfeld and Nicolson, 1989.
Csikszentmihalyi, Mihaly. Creativity: Flow and the Psychology of Discovery and Invention. New York: HarperCollins, 1996.
Eysenck, H. J. Genius: The Natural History of Creativity. Cambridge: Cambridge University Press, 1995.
Fölsing, Albrecht. Albert Einstein: A Biography. London: Viking, 1997.
Fox, Margalit. The Riddle of the Labyrinth: The Quest to Crack an Ancient Secret Code. New York: Ecco Press/HarperCollins, 2013.
Glynn, Jenifer. My Sister Rosalind Franklin. Oxford: Oxford University Press, 2012.
Hager, Thomas. Force of Nature: The Life of Linus Pauling. New York: Simon and Schuster, 1995.
Hardy, G. H. Collected Papers of G. H. Hardy. Oxford: Clarendon Press, 1979.
Hollingsworth, J. Rogers, Karl H. Müller, and Ellen Jane Hollingsworth. "The End of the Science Superpowers." Nature 454 (2008): 412–13.
Judson, Horace Freeland. The Eighth Day of Creation: Makers of the Revolution in Biology, expanded edition. Cold Spring Harbor, NY: Cold Spring Harbor Laboratory Press, 1996.
Kanigel, Robert. The Man Who Knew Infinity: A Life of the Genius Ramanujan. London: Scribner, 1991.
Kober, Alice E. "The Minoan Scripts: Fact and Theory." American Journal of Archaeology 52 (1948): 82–103.
Lake, Jim. "Why Pauling Didn't Solve the Structure of DNA." Nature 409 (2001): 558.
Olby, Robert. Francis Crick: Hunter of Life's Secrets. Cold Spring Harbor, NY: Cold Spring Harbor Laboratory Press, 2009.
Pope, Maurice. The Story of Decipherment: From Egyptian Hieroglyphs to Maya Script, revised edition. London: Thames & Hudson, 1999.
Ranganathan, S. R. Ramanujan: The Man and the Mathematician. Bombay: Asia Publishing House, 1967.
Ridley, Matt. Francis Crick: Discoverer of the Genetic Code. London: Harper Press, 2006.
Robinson, Andrew. Cracking the Egyptian Code: The Revolutionary Life of Jean-François Champollion. London: Thames & Hudson, 2012.
———. The Last Man Who Knew Everything: Thomas Young. Oxford: Oneworld, 2006.
———. The Man Who Deciphered Linear B: The Story of Michael Ventris. London: Thames & Hudson, 2002.
———. Sudden Genius? The Gradual Path to Creative Breakthroughs. Oxford: Oxford University Press, 2010.
Simonton, Dean Keith. Creativity in Science: Chance, Logic, Genius, and Zeitgeist. Cambridge: Cambridge University Press, 2004.
———. Genius, Creativity, and Leadership: Historiometric Inquiries. Cambridge, MA: Harvard University Press, 1984.
Tusa, John. On Creativity: Interviews Exploring the Process. London: Methuen, 2003.
Ventris, Michael. Work Notes on Minoan Language Research and Other Unedited Papers, edited by Anna Sacconi. Rome: Edizioni dell'Ateneo, 1988.
———, and John Chadwick. Documents in Mycenaean Greek. Cambridge: Cambridge University Press, 1956.
Watson, James D. The Double Helix: A Personal Account of the Discovery of the Structure of DNA. London: Weidenfeld and Nicolson, 1997.
Wilkins, Maurice. The Third Man of the Double Helix: The Autobiography of Maurice Wilkins. Oxford: Oxford University Press, 2003.

Chapter Six

The Sources of Modern Engineering Innovation

David P. Billington and David P. Billington Jr.

In recent years, leaders in the advanced democratic countries have expressed concern over the innovative capabilities of their nations.1 Modern societies need to meet environmental and natural resource challenges that are common to the world as a whole, and individual nations need to employ their people as fully as possible and to remain militarily and economically viable in a competitive world. Meeting these needs requires innovation and creativity of a high order.

An important question, though, is what constitutes productive innovation. Ideas that make money in a market economy do not all benefit society equally, and some may be detrimental. Many productive ideas are also incremental in nature. Modern societies need to innovate in deeper ways, and calls to renew national innovative capacities make certain assumptions about how to renew and sustain this deeper ability. It is widely believed that universities and other settings where basic scientific research takes place are now the source of deeper innovative ideas. Public and private investments to modernize industry and infrastructure, along with a population better trained in mathematics, science, and technical skills, are seen as also necessary, but basic science is the key.

In 1945 President Roosevelt's chief adviser on science, Vannevar Bush, argued in an influential report that basic or undirected scientific research—that is, inquiry for its own sake—was the stimulus to major engineering innovation up to that time.2 His report became the argument for establishing the National Science Foundation in 1950, and his thesis underpins the idea today that basic scientific research is the key to economic growth. However, given the economic performance of advanced societies in the last half century, despite research budgets that have been greater than at any time in history, the premise that basic research propels economic growth deserves closer examination. Three questions need to be asked:

1. Did the engines of economic growth in the late nineteenth and early twentieth centuries arise from basic science?
2. How did basic science contribute after 1945?
3. Will investments in basic research and in science and mathematics education bring deeper kinds of technical innovation in the future?

***

The United States underwent an industrial revolution in the late nineteenth and early twentieth centuries that allowed most of its population to live in cities and then in suburbs. Dramatic advances in food supply, the availability of fresh water and sanitation, the generation and use of inanimate energy, and manufacturing, transportation, housing, and education all raised the standard of living and made America the world's leading economy. These changes benefited from certain national advantages, such as abundant natural resources and a society that encouraged private innovation. However, the transformation of American life was primarily the work of engineering that embodied radically new technical ideas, in which the contribution of science came after the breakthrough, not before it.

Electric Power

The modern electric power grid can be said to have begun in 1878, when Thomas Edison conceived the idea of a network to generate and distribute electricity to power indoor incandescent lamps. Two years earlier he had secured private funds for a laboratory in Menlo Park, New Jersey, on the strength of his work as a telegraph engineer. The funding allowed him the freedom to think in new ways about how to use electricity for light and power. But his work was not undirected in the modern meaning of basic research; his support came from bankers who expected a return on their investment within a few short years.3

Earlier in the nineteenth century, scientists in Europe and the United States had explored the phenomena of electricity. To discover the principles of an electric circuit (voltage, current, and resistance, and the basic laws relating them) actually required electrical engineering to come before the science. But the discoverers of these principles are considered scientists because they were interested in knowledge for its own sake and not for any practical purpose. Edison made use of these findings to design useful engineering.4 However, the idea that he simply applied earlier discoveries to a practical end deeply mischaracterizes what he did. In fact, Edison had to challenge and overturn the scientific judgment of his time.

Leading scientists and engineers argued in the late 1870s that a network of light and power such as Edison proposed could not work. Their argument was that to achieve a maximum transfer of energy in an electric circuit, resistances inside and outside the power source would have to be equal. From this assumption, it could be shown mathematically that the light in the lamps would diminish (depending on the circuit arrangement) by the square or the cube of the number of lamps added to the circuit. Edison proved this argument false as a basis for engineering by designing an efficient system that used an electric generator with low internal resistance and incandescent lamps with filaments of high resistance.5

Rivals had experimented with incandescent light using bulbs with low resistance, and several inventors in the 1870s developed new bulbs of this kind, notably Sir Joseph Swan in England. But Edison knew that to deliver the electric current necessary to power low-resistance bulbs from a distance would have required uneconomically heavy copper wirelines. Edison's development of a high-resistance lamp greatly reduced the current and thus the amount of copper wireline needed. His rivals approached incandescent electric light as an isolated laboratory exercise, in the manner of a scientific experiment rather than an economical innovation.6

Edison employed a mathematical physicist, Francis Upton, to perform more complex calculations. Upton was initially skeptical of Edison's system, but later he had the candor to admit that he had been wrong:

I cannot imagine why I could not see the elementary facts in 1878 and 1879 more clearly than I did. I came to Mr. Edison a trained man, a postgraduate of Princeton; with a year's experience in Helmholtz's laboratory; with a working knowledge of calculus and a mathematical turn of mind. Yet my eyes were blind. . . . 7

Upton, and later trained minds in electrical engineering, proved more adept at solving the narrower and more sophisticated problems of making electric power more efficient. What made Edison a deeper innovator was his ability to perceive the "elementary facts" that mattered at the inception of the new industry. Edison probably could not have obtained funding had he been required at the time to pass review by established peer groups in science or engineering. However, the image of Edison as merely an applied scientist persists, as does the opposite image of him as an unsystematic tinkerer. (He did rely on trial and error to find a filament for his lightbulb, but this search was to supply a part of a highly systematic vision.) Many today regard his Menlo Park laboratory as the prototype of a highly creative team-research environment. In fact, Edison's lab was not team-oriented in the modern sense, because Edison was the source of its major insights, not the group.
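The circuit logic at stake can be sketched in modern notation (a back-of-the-envelope reconstruction with illustrative figures, not Edison's or his critics' own calculations):

```latex
\[
  P_{L} = \frac{V^{2} R_{L}}{(r + R_{L})^{2}},
  \qquad
  \eta = \frac{R_{L}}{r + R_{L}}
\]
% The power P_L delivered to a load R_L by a source of voltage V and
% internal resistance r peaks when R_L = r -- the "maximum transfer"
% condition the critics invoked -- but at that point the efficiency
% eta is only 50 percent. A distribution network wants high
% efficiency instead, which means r much smaller than R_L: precisely
% Edison's low-resistance generator feeding high-resistance lamps.
%
% The copper economics follow from the line loss I^2 R_line.
% Illustrative figures: at 110 V, a 100-ohm filament draws about
% 1.1 A (roughly 121 W); a 1-ohm lamp of equal power would run at
% about 11 V and 11 A. Since (11/1.1)^2 = 100, the low-resistance
% lamp wastes one hundred times as much power in the same wire, or
% needs roughly one hundred times the copper cross-section to keep
% losses equal.
```

On this arithmetic, the critics' theorem was true but answered the wrong design question.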


Edison failed to see the advantages of alternating current over his own system, which used direct current, and modern civilization owes its electricity supply as much to George Westinghouse, Charles Steinmetz, Nikola Tesla, and the other engineers who made alternating current practical as it does to Edison. In this later work, formally trained scientists and engineers employed in more specialized tasks played a vital role. The great private industries of the twentieth century created laboratories to employ such people, bringing further advances such as the tungsten filament that replaced Edison's carbon one.8 But the work of these laboratories was not as radical as Edison's initial vision. To realize his system, Edison made use of modern science but had to possess an independent insight as an engineer to overcome scientific opposition to his ideas.
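The eventual advantage of alternating current can be put in similarly compact terms (again a modern sketch, not the historical actors' own reasoning):

```latex
\[
  P_{\text{loss}} = I^{2} R_{\text{line}}
                  = \left(\frac{P}{V}\right)^{2} R_{\text{line}}
\]
% For a fixed delivered power P, raising the transmission voltage V
% tenfold cuts line losses a hundredfold. Transformers, which work
% only with alternating current, made stepping the voltage up for
% transmission and back down for safe use cheap and reliable --
% the advantage Edison's direct-current stations could not match.
```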

Internal Combustion

The early automobile owed nothing to basic science. The nineteenth-century internal combustion engines of Etienne Lenoir and Nikolaus Otto and the early gasoline cars of Karl Benz, Gottlieb Daimler, and René Panhard and Emile Levassor in Europe were the work of engineers. The seminal figure Nikolaus Otto invented the modern four-stroke engine cycle but did so without a knowledge of the thermodynamics involved.9

The auto industry rose to a dominant position in the American economy as a result of Henry Ford's 1908 vision of a revolutionary car, the Model T Ford, and his engineers then perfected a manufacturing system, the moving assembly line using standardized parts, that enabled him to mass-produce the Model T and reduce the price. Hundreds of thousands of Americans in town and country took to the road as a result, and by the mid-1920s, when consumers finally began to want more variety in cars, America had come to depend on motor vehicles.10

Ford had little education, but his assembly line is often cited as an example of "scientific management," a term popularized by Frederick Winslow Taylor in a 1911 book.11 Taylor advocated time and motion studies to make workers more productive, with the aim of making existing production systems more efficient. If Ford had followed Taylor, he would have tried to achieve marginal efficiencies in the earlier method of auto assembly, in which cars never moved until they were finished by workers who moved from car to car to perform particular tasks. Instead, Ford placed workers at different points on an assembly line and moved the cars to them. He timed his assembly-line workers in order to manufacture motor vehicles in a radically new way.12

In 1913 a chemist with Standard Oil of Indiana, William Burton, patented a new process that increased the amount of gasoline that could be refined from a barrel of crude oil from 10 to 20 percent, and in the 1930s Eugene Houdry patented a process that raised this amount to 40 percent. This work required a knowledge of chemistry but involved chemical engineering, not any new advances in science.13 In the 1930s, Chrysler introduced the streamlining of closed-body cars, after conducting research on vehicle aerodynamics that reflected a more scientific approach to engineering design.14 Improvements since then have turned the automobile into a machine of increasing performance and comfort, but the automobile today is still mostly a work of (very sophisticated) engineering.

The other great innovation to rely on internal combustion, the airplane, resulted from a race between the U.S. federal government and two bicycle makers in Ohio, in which a well-funded scientific approach failed and an engineering one on a shoestring proved successful. The federal effort to invent the airplane has been all but forgotten, but it was led by a distinguished astrophysicist, Samuel P. Langley, who headed the Smithsonian Institution in Washington, DC, from 1887 to 1906. Langley privately spent several years trying to fly unpiloted model airplanes, using small steam engines for propulsion, before he finally succeeded in 1896. Believing that he had proved the concept of powered flight, Langley saw no need to do further work, but two years later, with the Spanish-American War under way, the U.S. Army gave him $50,000 (over $1 million in 2012 dollars) to build a piloted airplane.


A model plane relied on its passive stability to stay aloft, and Langley designed his piloted airplane to be a scaled-up version of a model, with the addition of a gasoline engine but only limited manual controls. Instead of testing the design first as a glider, Langley concentrated on perfecting its parts and making them pass exacting laboratory tests. In two flight attempts at the end of 1903, the full-sized Langley airplane was unable to fly.15

Neither Wilbur nor Orville Wright was a high-school graduate, and they financed their research from their meager income making bicycles. From their experience in cycling, the Wrights realized that an airplane would be highly unstable in flight, and in the years 1899–1902 the brothers designed a glider in which the pilot could maneuver in three dimensions by pulling wires to bend the rear wing edges like flaps. During months of slack demand for bicycles, the Wrights conducted full-scale tests on the sand dunes at Kitty Hawk, North Carolina. After failures that they corrected with the help of homemade testing equipment, in 1902 they achieved an efficient glider. Then they designed and added a gasoline engine and propellers. Returning to Kitty Hawk, on December 17, 1903, the Wrights conducted the first steady-level flight of a powered airplane.16

Langley and the Wright brothers relied on the research of Sir George Cayley, who had proposed the basic cross-wing configuration of the modern airplane in 1799 and who had identified the forces of lift and drag that would affect flight. But advances in theoretical aerodynamics over the century that followed were as useless to Langley as they were to the Wrights.17 Langley failed, though, not because he lacked science, but because he did not think as a good engineer. He worked out his ideas in theory and perfected the details of a design that was flawed overall. The Wrights tested their overall design from the beginning and worked on more detailed problems later. Their research with a soapbox wind tunnel received sophisticated engineering analysis in the 1980s and was found to be brilliantly efficient.18

After the Wrights proved their flyer, later engineers made vitally important improvements to aircraft design, such as replacing wing bending with rigid wing flaps. After 1903, theoretical aerodynamics finally made useful advances as well. The U.S. government set up new research facilities in the 1920s and 1930s that showed the value of streamlining, and engineers in private industry soon designed successful streamlined airplanes, such as the Douglas DC-3. However, the next great advance in aviation, the jet engine, was pioneered by a British engineer, Frank (later Sir Frank) Whittle. The discovery essential to space travel, that rocket thrust was possible in a vacuum, was a result of engineering research by a physicist, Robert Goddard, in the United States.19

***

During the Second World War, the role of scientists changed. Physicists took a leading role in the development of nuclear weapons, in new uses of radio waves, and in other advances vital to the war effort.20 The need for stronger defenses in the Cold War after 1945 brought greater public funding of higher education, especially for science and engineering, in the United States. Research performed by universities also expanded with federal funding.21 The thesis that science discovered new ideas and engineering applied them came to be widely accepted as an explanation of how federal funding for research would sustain the innovation that America needed.

The dominant industries after the war continued to be motor vehicles and the associated steel and oil industries, the aerospace industry associated now mostly with national defense, and electrical and electronic goods and electric power. These industries suffered in the late twentieth century, though, except for electronics, which underwent explosive growth. Invented in 1958–1959, the integrated circuit or microchip incorporated the transistor conceived a decade earlier and achieved phenomenal increases in working capacity by the end of the twentieth century. As a result of the advance in microchip design, the electronic computer went from a scientific instrument to the engine of a new economy. Any general view of the role of science and basic research after 1945 must therefore explain how the transistor and the microchip came about.

Modern electronics began with Edison's discovery of an effect that later engineers identified as the ability of electrons to flow through a vacuum. In the triode, invented in 1906, a small electrified grid inside a vacuum tube modulated an electron flow, allowing a weak signal to control and amplify a much stronger one. Triode amplifiers and other improvements to electronic circuit design made long-distance telephony and the transmission and reception of radio (and later television) signals practical. Triodes could also work as fast switches, and early computers after the Second World War used them for this purpose. High rates of burnout made the tubes unreliable in large assemblies, though, and their heat and bulk also limited their use. The demand for more compact and reliable electronic equipment in the postwar era made urgent the need for better ways to amplify and switch electricity.22

By the 1930s, advances in quantum theory gave scientists a better understanding of how electric charge was carried through solid materials called semiconductors, which could either conduct or impede the flow of charge. Executives at the Bell Telephone Laboratories believed that such materials might be made to perform the functions of vacuum tubes, and wartime researchers found ways to purify semiconductors and then add impurities to control their conductivity more precisely. These impurities either created excess electrons in the semiconductor or created "holes" inside the material by reducing the number of electrons. Either the free electrons or the holes could carry electric charge.23
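The effect of such doping can be stated compactly in a standard textbook relation (added here for clarity; it is not drawn from the chapter's sources):

```latex
\[
  \sigma = q\,(n\,\mu_{n} + p\,\mu_{p})
\]
% The conductivity sigma of a semiconductor depends on the
% concentrations of free electrons (n) and holes (p) and on their
% mobilities (mu_n, mu_p); q is the elementary charge. Donor
% impurities raise n ("n-type" material) and acceptor impurities
% raise p ("p-type"), which is how controlled doping could set the
% conductivity of germanium or silicon almost at will.
```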


As the war came to an end in 1945, William Shockley, a physicist returning to Bell Labs from war work, began experiments to test the possibility of amplification through a semiconductor. Shockley positioned a positively charged metal plate close to a sheet of silicon, a semiconductor, that had an excess of (negatively charged) electrons. Science suggested to Shockley that the plate would attract electrons from inside the silicon and that these would amplify an electric current going through the silicon surface. The aim of the experiment was not scientific knowledge for its own sake, though, but to demonstrate the principle of how a solid material might replace the vacuum tube as an amplifier. In his experiments, to his surprise, Shockley found the amplification he sought to be negligible.24

To investigate what had gone wrong, Shockley handed the problem to two other physicists at Bell Labs, John Bardeen, a theorist, and Walter Brattain, an experimentalist. Researchers at the Labs agreed that two semiconductors, germanium and silicon, were the best materials to use if amplification through a solid were possible. But Bardeen and Brattain decided that they needed to know more about the natural properties of these materials to understand why Shockley's experiments did not work. Over the years 1946–1947, with Brattain's help, Bardeen finally realized what had happened. There were energy states on the semiconductor surface that trapped the electrons and prevented an amplified current from getting out. But holes also migrated close to the surface, below the electrons. When a positive charge was placed in contact with the surface, the electrons attracted to the point contact increased the number of holes underneath, and these could amplify a current that flowed out. Experiments at the end of 1947 (using germanium instead of silicon) demonstrated the effect, and the device received the name "transistor."25

The transistor therefore emerged from a decision to investigate the natural properties of materials, a clear instance of basic scientific research. However, this study occurred only because there was an engineering objective, a better amplifier; the work was not truly undirected research in the sense that Vannevar Bush had urged. The transistor would have been impossible without prior advances in physics and in the engineering of materials, but Bardeen and Brattain could not simply apply quantum physics, because in 1945 the theory did not explain how to prevent electrons going through solid material from becoming trapped on the surface. Instead of simply applying science as Shockley had tried to do, and as the notion of innovation as applied science would require, Bardeen investigated the science more deeply, acting as a pure scientist rather than an applied one. However, he would not have done so without a prior engineering image in his mind of what he was trying to achieve.26

Shockley soon created a more efficient version of the transistor, and manufacturers in the early 1950s improved it further.27 Transistors made of germanium and then silicon eventually replaced vacuum tubes for most electronic needs, not only as amplifiers but also as switches. As transistors proved their value, manufacturers tried to use more of them in confined spaces by making circuitry smaller. Transistors and other circuit elements still had to be wired together by hand, though, and it became clear in the mid-1950s that a practical limit would soon be reached to circuit miniaturization.

A solution to this problem, the integrated circuit or microchip, was the insight of two engineers, Jack Kilby and Robert Noyce. Kilby had earned bachelor's and master's degrees in electrical engineering after the war, and Noyce had received a doctorate in physics; both began their careers in the early 1950s with private firms that made electronic devices. In May 1958 Kilby moved to a new firm, Texas Instruments in Dallas, Texas; the year before, Noyce and several fellow engineers had organized a new firm of their own, Fairchild Semiconductor, near Stanford University in Palo Alto, California.28

In his 2000 Nobel Prize lecture, Kilby described what happened after he arrived in Texas: "When I started at TI in May 1958, I had no vacation coming. So I worked through a period when about 90 percent of the workforce took what we called 'mass vacation.' I was left with my thoughts and imagination."29 A solution to the problem of miniaturization then dawned on him. Kilby knew that either of the materials used to make transistors, germanium or silicon, could also be used to make the other components of a circuit, such as resistors and capacitors. Neither was ideal for every component, but making all of them from one material would remove the need to combine separate materials. Kilby successfully tested a prototype in which all of the circuit components were made of germanium, although he used wires to connect them. He also made prototypes out of silicon.30

Six months after Kilby's breakthrough, Noyce came to the same insight from a different direction. One of his colleagues, Jean Hoerni, had found an efficient way to lay down metallic leads (the planar process), and Noyce realized that it would be simpler if all of the circuit components interconnected in this way could be made of one material.31 After several years of litigation, Texas Instruments and Fairchild Semiconductor agreed to cross-license their patents. The new microchip could be printed by machine, removing the barrier to miniaturization. No new science pointed to the integrated circuit: it was an engineering insight prompted by an engineering problem.32

The microchip nearly failed to find a market because private electronics manufacturers preferred different materials for different circuit components. The three armed services had research programs of their own to solve the problem of miniaturization and showed no interest until it became clear, a few years later, that their own programs had led nowhere. Fortunately, the National Aeronautics and Space Administration needed a compact on-board computer to guide spacecraft and bought microchips in large numbers. The U.S. Air Force also soon realized that microchips could be useful to guide ballistic missiles.33 In 1968, Noyce and several colleagues founded a new company, Intel, where in 1971 Ted Hoff and other engineers invented a general-purpose microchip, the microprocessor. The new device made possible the compact personal computer that Steve Jobs and Steve Wozniak of Apple Computer made commercially successful in the late 1970s.34

***

In the twentieth century, formal training in science and engineering became essential for most innovators, and science made important contributions in all areas of engineering. However, basic science was not the driver of major innovation predicted in 1945.35 Project Hindsight, a retrospective study of U.S. defense research, found almost no role for pure science or undirected research in generating militarily significant innovations in the period from 1945 to 1965; instead, engineering needs overwhelmingly defined the research and focused the related scientific problems that needed to be solved. This pattern appears to have continued in defense research since then.36 In the civilian economy, the transistor has been claimed as an outgrowth of basic science, yet in this case a straightforward application of science did not happen; an engineering need defined the problem that science eventually did help to solve. The microchip was an engineering insight, even though its innovators needed a grasp of modern physics to achieve it.

Confusion exists in the public mind over the distinction between science and engineering. The core activity in science, and the skill in which professional scientists are trained, is the discovery of facts that exist in nature. The core activity in engineering, and the skill in which professional engineers are trained, is the design of things that do not naturally exist. The insight required by each activity cannot derive from the other. Scientists and engineers benefit from, and often perform, each other's work; but when engineers study natural phenomena, they are doing science, and when scientists engage in design, they are working as engineers. The relationship between science and engineering is more complex and interesting than the linear model expressed in the phrase "Science discovers and engineering applies." Deeper innovation is harder to sustain if society has a mistaken view of the relationship.

Modern societies now expect private and government laboratories, universities in their research function, and research parks that bring universities and industries together to be the sources of future innovation. The model is Silicon Valley, the area around Stanford University in California where high-technology firms have clustered since the 1950s and 1960s. The potential of this area to incubate new industry was the vision of an individual, Frederick Terman (1901–1982), the dean of engineering and then provost of Stanford, who saw the potential of electronics to become a leading economic sector.37 However, the growth that Terman attracted to the Stanford area largely depended on funding for national defense, and the failure of Silicon Valley and similar regions to offset the declining sectors of the American economy in the late twentieth century suggests that expectations of the model have been too high.

Where society looks for radical insight may not be as important as whether society in a broader sense values such insight, not just for the money it can earn, but for the challenge to conventional thinking that it requires. This is a matter of education. America and other advanced nations are trying to renew their innovative capacities by providing a workforce with generic technical skills and by training a smaller number of people in higher-order forms of these generic skills. Engineering education imparts a body of standardized knowledge beginning with science and teaches how to solve problems that are in some underlying sense familiar. General education of undergraduate and secondary school students in mathematics and science does the same in a more basic way. Standardized knowledge implies a world in which work is stable and routine, and its emphasis on conventional understanding discourages deeper questioning.

For deeper innovation to continue, engineers need to emulate radical innovators, and to do so they need to study them. The study of individual engineers as part of mathematics, science, and engineering education is itself a radical idea, when these disciplines instead stress generic principles and applications. In fact, the study of individual engineers and their greatest works does not require an overhaul of instruction, and a growing body of scholarship and teaching shows how this kind of education can be included in the curriculum.38 When technical education includes not just learning standardized best practices and their application to amenable problems, but also learning about those people whose deeper insights raised standards, then a nation's innovative capacities are ready to be renewed more deeply, and new challenges may be faced with greater confidence. The strength of our civilization is its ability to overturn conventional thinking from time to time in constructive ways. By studying and emulating those engineers whose insights have overcome critical barriers in the past, sometimes with the help of science but not simply by trying to apply it, future innovators can learn what real breakthroughs require and may find the inspiration to achieve them.

Notes

1. See Rising above the Gathering Storm: Energizing and Employing America for a Brighter Economic Future (Washington, DC: National Academies Press, 2005).
2. Vannevar Bush, Science, The Endless Frontier (Washington, DC: U.S. Government Printing Office, 1945).
3. Paul Israel, Edison: A Life of Invention (New York: John Wiley, 1998).
4. The scientific ideas that Edison worked with may be found in the formula manual that he used, Electrical Tables and Formulas for the Use of Telegraph Inspectors and Operators, compiled by Latimer Clark and Robert Sabine (London: F. N. Spon, 1871). A photocopy of the manual, with marginal notes in Edison's hand, is in the Papers of Thomas Edison, Rutgers University, New Brunswick, NJ.
5. For the opposition to Edison's proposed system, see Harold C. Passer, "Electrical Science and the Early Development of the Electrical Manufacturing Industry in the United States," Annals of Science [London] 7, no. 4 (1951): 382–92. For the flaw in the technical argument against Edison, see David P. Billington and David P. Billington Jr., Power, Speed and Form: Engineers and the Making of the Twentieth Century (Princeton, NJ: Princeton University Press, 2006), 220–23.
6. See George Wise, "Swan's Way: A Study in Style," IEEE Spectrum 19, no. 4 (1982): 66–70.
7. For the Upton quote, see Passer, "Electrical Science," 388.
8. See Leonard S. Reich, The Making of Industrial Research: Science and Business at GE and Bell, 1876–1926 (Cambridge: Cambridge University Press, 1985), 62–128, 142–50.
9. On Nikolaus Otto, see Lynwood Bryant, "The Origin of the Automobile Engine," Scientific American 216, no. 3 (1967): 102–12.
10. On Henry Ford, see Douglas Brinkley, Wheels for the World: Henry Ford, His Company, and a Century of Progress (New York: Viking, 2003). On the engineering of the Model T, see the Ford Manual: For Owners and Operators of Ford Cars (Detroit: Ford Motor Company, 1914); and on the Ford assembly line, see David Hounshell, From the American System to Mass Production, 1800–1932: The Development of Manufacturing Technology in the United States (Baltimore, MD: Johns Hopkins University Press, 1984), 217–61.
11. See Frederick Winslow Taylor, The Principles of Scientific Management (New York: Harper and Brothers, 1911).
12. See Hounshell, From the American System to Mass Production, 249–53.
13. For the work of Burton and Houdry, see John Lawrence Enos, Petroleum Progress and Profits: A History of Process Innovation (Cambridge, MA: MIT Press, 1962), 1–59, 131–62.
14. For the streamlining of cars, see Carl Breer, The Birth of the Chrysler Corporation and Its Engineering Legacy, ed. Anthony J. Yanik (Warrendale, PA: Society of Automotive Engineers, 1995).
15. For Langley's research, see Samuel P. Langley and Charles M. Manly, Langley Memoir on Mechanical Flight (Washington, DC: Smithsonian Institution, 1911). Langley tried to launch his full-scale airplane from a houseboat on the Potomac River in October and December 1903, and on both occasions the airplane snagged in the launch mechanism and plunged into the river. But his airplane required extensive structural modifications before a test flight in 1914 was finally successful. See Tom D. Crouch, "The Feud between the Wright Brothers and the Smithsonian," Invention and Technology 2, no. 3 (1987): 34–46.
16. On the Wright brothers, see Peter L. Jakab, Visions of a Flying Machine: The Wright Brothers and the Process of Invention (Washington, DC: Smithsonian Institution Press, 1990).
17. For Cayley, see John D. Anderson Jr., A History of Aerodynamics and Its Impact on Flying Machines (Cambridge: Cambridge University Press, 1997), 62–80; for the uselessness of aerodynamic theory to airplane design before 1903, see 114–38, 192, and 242–43.
18. See Howard S. Wolko, ed., The Wright Flyer: An Engineering Perspective (Washington, DC: Smithsonian Institution Press, 1987).
19. For later developments in aviation, see Anderson, History of Aerodynamics, 244–46. See also Sir Frank Whittle, "The Birth of the Jet Engine in Britain," in The Jet Age, ed. Walter J. Boyne and Donald S. Lopez (Washington, DC: National Air and Space Museum, 1979), 3–24; and Robert H. Goddard, "A Method of Reaching Extreme Altitudes," Nature 105, no. 2652 (1920): 809–11.
20. On this war work, see Irvin Stewart, Organizing Scientific Research for War: The Administrative History of the Office of Scientific Research and Development (Boston: Little, Brown, 1948), with a foreword by Vannevar Bush.
21. For the growth in federally funded research, see Roger L. Geiger, "Science, Universities, and National Defense, 1945–1970," in Science after '40, ed. Arnold Thackray, special issue of Osiris, 2nd series, 7 (1992): 26–48.
22. For an overview of electronics in the vacuum tube era, see Abraham Marcus and William Marcus, Elements of Radio (New York: Prentice-Hall, 1943).
23. For the origins of the transistor, see Lillian Hoddeson, "The Discovery of the Point-Contact Transistor," Historical Studies in the Physical Sciences 12, no. 1 (1981): 41–76; and for semiconductors, G. L. Pearson and W. H. Brattain, "History of Semiconductor Research," Proceedings of the IRE 43, no. 12 (1955): 1794–806.
24. For his contributions to the transistor, see William Shockley, "The Path to the Conception of the Junction Transistor," IEEE Transactions on Electron Devices, vol. ED-23, no. 7 (1976): 597–620.
25. For the breakthrough to the transistor, see John Bardeen, "Semiconductor Research Leading to the Point-Contact Transistor" (Nobel Prize Lecture, 1956), in Nobel Lectures in Physics, vol. 3 (Singapore: World Scientific, 1998), 318–41. For an overview, see also Michael Riordan and Lillian Hoddeson, Crystal Fire: The Invention of the Transistor and the Birth of the Information Age (New York: W. W. Norton, 1997).
26. For the role of science, see M. Gibbons and C. Johnson, "Science, Technology, and the Development of the Transistor," in Science in Context: Readings in the Sociology of Science, ed. Barry Barnes and David Edge (Cambridge, MA: MIT Press, 1982), 177–85. The engineering image is clear from the title of J. Bardeen and W. H. Brattain, "The Transistor, A Semiconductor Triode," Physical Review 74, no. 2 (1948): 230–31.
27. For improvements to the transistor, see Ian Ross, "The Invention of the Transistor," Proceedings of the IEEE 86, no. 1 (1998): 7–28.
28. For the lives and work of Kilby and Noyce, see T. R. Reid, The Chip: How Two Americans Invented the Microchip and Launched a Revolution (New York: Simon and Schuster, 1984).
29. See Jack S. Kilby, "Turning Potential into Realities: The Invention of the Integrated Circuit" (Nobel Prize Lecture, 2000), in Nobel Lectures: Physics, ed. Gösta Ekspong (Singapore: World Scientific, 2002), 471–85, quote on 479.
30. See Jack S. Kilby, "Invention of the Integrated Circuit," IEEE Transactions on Electron Devices, vol. ED-23, no. 7 (1976): 648–54.
31. For the work at Fairchild, see Christophe Lécuyer and David C. Brock, Makers of the Microchip: A Documentary History of Fairchild Semiconductor (Cambridge, MA: MIT Press, 2010); the notebook pages showing Noyce's insight are on 151–55. For his account of the integrated circuit, see Robert N. Noyce, "Microelectronics," Scientific American 237, no. 3 (1977): 63–69.
32. For the litigation between Texas Instruments and Fairchild, see Reid, Chip, 96–117. For the manufacturing of the integrated circuit, see Bernard T. Murphy, Douglas E. Haggan, and William Troutman, "From Circuit Miniaturization to the Scalable IC," Proceedings of the IEEE 88, no. 5 (2000): 690–703.
33. For the role of the microchip in the U.S. space program, see Eldon C. Hall, Journey to the Moon: The History of the Apollo Guidance Computer (Reston, VA: American Institute for Aeronautics and Astronautics, 1996).
34. For the microprocessor, see R. N. Noyce and M. E. Hoff, "A History of Microprocessor Development at Intel," IEEE Micro 1, no. 1 (1981): 8–21. On the development of the personal computer, see Martin Campbell-Kelly and William Aspray, Computer: A History of the Information Machine (Cambridge, MA: Westview, 2004), 207–79.
35. For an inquiry into this question, see Donald E. Stokes, Pasteur's Quadrant: Basic Science and Technological Innovation (Washington, DC: Brookings Institution Press, 1997).
36. For Project Hindsight, see Col. Raymond S. Isenson, Project Hindsight: Final Report (Washington, DC: Office of the Director of Defense Research and Engineering, 1969). For the engineering-directedness of advanced projects research in defense, see also William B. Bonvillian, "Power Play," American Interest 2, no. 2 (2006): 39–48.
37. See C. Stewart Gillmor, Fred Terman at Stanford: Building a Discipline, A University, and Silicon Valley (Stanford, CA: Stanford University Press, 2004).
38. Two courses taught by the senior author at Princeton University since 1974 have developed this kind of undergraduate teaching. The concepts may be found in David P. Billington, The Tower and the Bridge: The New Art of Structural Engineering (Princeton, NJ: Princeton University Press, 1985); David P. Billington, The Innovators: The Engineering Pioneers Who Made America Modern (New York: John Wiley, 1996); and Billington and Billington, Power, Speed and Form. Additional books are forthcoming. With modification, much of this material could also be taught at the secondary level. See David P. Billington Jr., "Engineering in the Modern World," World History Bulletin 24, no. 2 (2008): 22–24.

Bibliography

Anderson, John D., Jr. A History of Aerodynamics and Its Impact on Flying Machines. Cambridge: Cambridge University Press, 1997.
Bardeen, John. "Semiconductor Research Leading to the Point-Contact Transistor" (Nobel Prize Lecture, 1956). In Nobel Lectures in Physics, vol. 3, 318–41. Singapore: World Scientific, 1998.
———, and W. H. Brattain. "The Transistor, A Semiconductor Triode." Physical Review 74, no. 2 (1948): 230–31.
Billington, David P. The Innovators: The Engineering Pioneers Who Made America Modern. New York: John Wiley, 1996.
———. The Tower and the Bridge: The New Art of Structural Engineering. Princeton, NJ: Princeton University Press, 1985.
———, and David P. Billington Jr. Power, Speed and Form: Engineers and the Making of the Twentieth Century. Princeton, NJ: Princeton University Press, 2006.
Billington, David P., Jr. "Engineering in the Modern World." World History Bulletin 24, no. 2 (2008): 22–24.
Bonvillian, William B. "Power Play." American Interest 2, no. 2 (2006): 39–48.
Breer, Carl. The Birth of the Chrysler Corporation and Its Engineering Legacy, edited by Anthony J. Yanik. Warrendale, PA: Society of Automotive Engineers, 1995.
Brinkley, Douglas. Wheels for the World: Henry Ford, His Company, and a Century of Progress. New York: Viking, 2003.
Bryant, Lynwood. "The Origin of the Automobile Engine." Scientific American 216, no. 3 (1967): 102–12.
Bush, Vannevar. Science, The Endless Frontier. Washington, DC: U.S. Government Printing Office, 1945.
Campbell-Kelly, Martin, and William Aspray. Computer: A History of the Information Machine. Cambridge, MA: Westview, 2004.
Crouch, Tom D. "The Feud between the Wright Brothers and the Smithsonian." Invention and Technology 2, no. 3 (1987): 34–46.
Electrical Tables and Formulas for the Use of Telegraph Inspectors and Operators. Compiled by Latimer Clark and Robert Sabine. London: F. N. Spon, 1871.
Enos, John Lawrence. Petroleum Progress and Profits: A History of Process Innovation. Cambridge, MA: MIT Press, 1962.
Ford Manual: For Owners and Operators of Ford Cars. Detroit: Ford Motor Company, 1914.
Geiger, Roger L. "Science, Universities, and National Defense, 1945–1970." In Science after '40, edited by Arnold Thackray, special issue of Osiris, 2nd series, 7 (1992): 26–48.
Gibbons, M., and C. Johnson. "Science, Technology, and the Development of the Transistor." In Science in Context: Readings in the Sociology of Science, edited by Barry Barnes and David Edge, 177–85. Cambridge, MA: MIT Press, 1982.
Gillmor, C. Stewart. Fred Terman at Stanford: Building a Discipline, A University, and Silicon Valley. Stanford, CA: Stanford University Press, 2004.
Goddard, Robert H. "A Method of Reaching Extreme Altitudes." Nature 105, no. 2652 (1920): 809–11.
Hall, Eldon C. Journey to the Moon: The History of the Apollo Guidance Computer. Reston, VA: American Institute for Aeronautics and Astronautics, 1996.
Hoddeson, Lillian. "The Discovery of the Point-Contact Transistor." Historical Studies in the Physical Sciences 12, no. 1 (1981): 41–76.
Hounshell, David. From the American System to Mass Production, 1800–1932: The Development of Manufacturing Technology in the United States. Baltimore, MD: Johns Hopkins University Press, 1984.
Isenson, Raymond S. Project Hindsight: Final Report. Washington, DC: Office of the Director of Defense Research and Engineering, 1969.
Israel, Paul. Edison: A Life of Invention. New York: John Wiley, 1998.
Jakab, Peter L. Visions of a Flying Machine: The Wright Brothers and the Process of Invention. Washington, DC: Smithsonian Institution Press, 1990.
Kilby, Jack S. "Invention of the Integrated Circuit." IEEE Transactions on Electron Devices, vol. ED-23, no. 7 (1976): 648–54.
———. "Turning Potential into Realities: The Invention of the Integrated Circuit" (Nobel Prize Lecture, 2000). In Nobel Lectures: Physics, edited by Gösta Ekspong, 471–85. Singapore: World Scientific, 2002.
Langley, Samuel P., and Charles M. Manly. Langley Memoir on Mechanical Flight. Washington, DC: Smithsonian Institution, 1911.
Lécuyer, Christophe, and David C. Brock. Makers of the Microchip: A Documentary History of Fairchild Semiconductor. Cambridge, MA: MIT Press, 2010.
Marcus, Abraham, and William Marcus. Elements of Radio. New York: Prentice-Hall, 1943.
Murphy, Bernard T., Douglas E. Haggan, and William Troutman. "From Circuit Miniaturization to the Scalable IC." Proceedings of the IEEE 88, no. 5 (2000): 690–703.
Noyce, Robert N. "Microelectronics." Scientific American 237, no. 3 (1977): 63–69.
———, and M. E. Hoff. "A History of Microprocessor Development at Intel." IEEE Micro 1, no. 1 (1981): 8–21.
Passer, Harold C. "Electrical Science and the Early Development of the Electrical Manufacturing Industry in the United States." Annals of Science [London] 7, no. 4 (1951): 382–92.
Pearson, G. L., and W. H. Brattain. "History of Semiconductor Research." Proceedings of the IRE 43, no. 12 (1955): 1794–806.
Reich, Leonard S. The Making of Industrial Research: Science and Business at GE and Bell, 1876–1926. Cambridge: Cambridge University Press, 1985.
Reid, T. R. The Chip: How Two Americans Invented the Microchip and Launched a Revolution. New York: Simon and Schuster, 1984.
Riordan, Michael, and Lillian Hoddeson. Crystal Fire: The Invention of the Transistor and the Birth of the Information Age. New York: W. W. Norton, 1997.
Rising above the Gathering Storm: Energizing and Employing America for a Brighter Economic Future. Washington, DC: National Academies Press, 2005.
Ross, Ian. "The Invention of the Transistor." Proceedings of the IEEE 86, no. 1 (1998): 7–28.
Shockley, William. "The Path to the Conception of the Junction Transistor." IEEE Transactions on Electron Devices, vol. ED-23, no. 7 (1976): 597–620.
Stewart, Irvin. Organizing Scientific Research for War: The Administrative History of the Office of Scientific Research and Development. Boston: Little, Brown, 1948. (Foreword by Vannevar Bush.)
Stokes, Donald E. Pasteur's Quadrant: Basic Science and Technological Innovation. Washington, DC: Brookings Institution Press, 1997.
Taylor, Frederick Winslow. The Principles of Scientific Management. New York: Harper and Brothers, 1911.
Whittle, Sir Frank. "The Birth of the Jet Engine in Britain." In The Jet Age, edited by Walter J. Boyne and Donald S. Lopez, 3–24. Washington, DC: National Air and Space Museum, 1979.
Wise, George. "Swan's Way: A Study in Style." IEEE Spectrum 19, no. 4 (1982): 66–70.
Wolko, Howard S., ed. The Wright Flyer: An Engineering Perspective. Washington, DC: Smithsonian Institution Press, 1987.

Chapter Seven

Technically Creative Environments
Susan Hackwood

In economically advanced countries, about 80 percent of GDP growth is due to the introduction of new technologies.1 So it is not surprising that the topic of innovation is actively researched2 and discussed in economics, politics, and the media.3 Indeed, it has become an immensely popular topic in many science, technology, and policy arenas. Innovation in this context is inextricably linked to creativity. As creativity is a vast field, I confine my contribution to a specific aspect of it, and one in which I have experience: how to foster environments favorable to technical creativity.

To begin with, it is useful to distinguish technical creativity from creativity in general. Creativity has been researched for more than half a century,4 and its relationship to human accomplishment has been documented and discussed quite extensively.5 Much controversy remains around the concept of creativity, and I do not go into the issues here; but I do use facts and theories that, to the best of my knowledge, are on solid ground among psychologists, sociologists, and other researchers in the field.6 As an operational definition, let us take the following: “Creativity is the ability to bring about the new and valuable”—“valuable” meaning “true, good, beautiful, or useful.” The distinguishing characteristic of “technical” creativity versus creativity in general is that the “valuable” part brought about by technical creativity is not the true, good, or beautiful but rather “the useful.”

Scientific creativity produces new truths; artistic creativity produces new beauty; social, philosophical, and other forms of creativity produce new goodness. But technical creativity may produce none of the above. A new type of deadly weapon is not a new truth, a new beauty, or a new goodness, but it is a new useful thing. Thus, technical creativity is morally neutral. It can produce a new medicine or a new gas chamber. Moral neutrality is a second characteristic specific to technical creativity. Charles Murray’s Human Accomplishment5 emphasizes the role of what he calls “transcendental values” (the true, the good, the beautiful) in driving the creative mind toward accomplishment. But these drivers will not do for technical creativity, which is oriented toward the useful and the morally neutral. The key driver of technical creativity is, in fact, the desire to achieve power over nature, for whatever moral or immoral reasons one might have. It is the desire, as A. N. Whitehead put it, to “live, to live well, to live better.” And it can be either Promethean or compassionate: it can be the drive to gain power over nature in order to make a better world of one’s own choice, or in order to make a better world out of love for someone else.

Technical creativity, besides being distinct from other creativities in its goal and motivators, is distinguished also by a necessary requirement: high intelligence of a specific kind—basically a high IQ with a high quantitative and spatial component. It is well known that creativity in general is very weakly correlated with IQ;7 thus, a high quantitative and spatial IQ is definitely not a sufficient condition for technical creativity, but it seems to be a necessary condition.

What else do we need to know about creativity? Creativity is thought to have a genetic component which, unlike IQ, is emergenic and so does not run in families.8 Identical twins correlate only about 0.5 in various creativity tests; nonidentical twins show basically zero correlation.9 So, a person is probably born with some potential for creativity, although how much this weighs against environmental influences remains very vague.

As to the role of the environment, various popular techniques purported to enhance creativity (brainstorming, lateral thinking, prize competitions, etc.) are unproven at best and remain very controversial.4 Better established is the fact that potential creativity can be stunted—a fact that is hardly surprising. We also need to know something about the personality traits of creative individuals, but more on this below. For now we have the essential facts on creativity in general, insofar as they are relevant to technical creativity and to our goal of understanding environments favorable to technical creativity.

There are basically three problems with nurturing technical creativity:

1. How does a potentially creative child turn out to be an actually creative eighteen-year-old?
2. How does a creative eighteen-year-old acquire the technical expertise necessary for creative achievement?
3. How does a group of technically expert and creative individuals work together to achieve higher creativity?

All three are important, and I hope to discuss issues 1 and 2 in future publications, but here the focus is on 3—technically expert and creative individuals working together—because the education process that determines how a creative adult is produced is discussed far more often. Also—although this is perhaps not a good goal in the long term—we can effectively bypass questions 1 and 2 by considering the universal phenomenon of brain mobility, the so-called brain drain (or brain gain, depending on which side you stand on).

This phenomenon recurs throughout history. Brains have always moved according to economic, social, intellectual, and political forces. They moved to Athens, Rome, and Alexandria in the ancient world; to Baghdad, Bologna, and Paris in the Middle Ages; to Florence, Holland, and London in the Renaissance; to France, England, and Germany in the nineteenth and early twentieth centuries; and to the United States ever since (though maybe not for much longer unless current policies are drastically changed).

The United States, like her historical predecessors, has been a tremendous gainer from this mobility. Statistics to prove the point abound. For example, a third of U.S. Nobel Prize winners are foreign born,10 rising to 44 percent if you count Nobel Prize winners who are the children of immigrants. Members of the National Academy of Sciences are about 25 percent foreign born.11 In Eastern Europe in the late 1880s, Samuel Goldwyn (MGM, Paramount), Carl Laemmle (Universal Studios), William Fox (Fox Films), Louis B. Mayer (MGM), Jack Warner (Warner Bros. Studio), and Adolph Zukor (Paramount Pictures) all lived within two hundred miles of Warsaw. When they moved to Los Angeles, a new industry was born.

It is instructive to look in more detail at the size of the brain-gain phenomenon in the United States. Fewer brains may be willing to come and stay in the United States than in the past; indeed, the numbers may shrink. But the effect is still enormous, and with very little effort on the part of American society, it could remain so. Consider this. In 1972 the U.S. brain gain was at a low point. America was not very attractive to professional immigrants: the Apollo program had ended; many aerospace engineers were unemployed; there was an energy crisis and poor economic conditions; there was the Vietnam War and poor social conditions; and there were cuts to defense, cuts to research and development, and poor university conditions. Nevertheless, in that year 11,323 scientists and engineers and 7,143 physicians and surgeons immigrated into the United States—18,466 individuals in total.12 I argue below that this is a large number. But note that it is dwarfed by more recent immigration. A few years ago, the United States gave about 200,000 H-1B visas per year (195,000 in 2002–2003), and after September 11, 2001, the number, although reduced, was still about 85,000 per year (65,000, with an additional 20,000 for students who had received at least a master’s degree from a U.S. institution).

We should add to those numbers the O-1 visa; for several years now, the United States has given about 7,000 of these per year.13 Over half of these visa applicants have advanced degrees, and about 90 percent of those degrees are in science or technology;14 most holders go on to earn a higher salary than their U.S.-born counterparts.15

In any case, let us take as a reference point the low visa numbers of 1972 and compare them with what the United States can produce, so to speak, in-house. How many technically creative individuals does the United States produce per year? We do not know, but we can make some reasonable estimates. The United States produces about 4 million babies per year. Those with an IQ above the threshold for, say, attaining a PhD in a technical field are about 2 percent, or 80,000. Of these, my guess is that no more than 15 percent have the motivation and opportunity to obtain a PhD or the equivalent technical expertise necessary to make a creative contribution in a technical field; so we are talking about 12,000 individuals. This is a rough but reasonable estimate: the number of PhDs granted by U.S. institutions in technical subjects to U.S.-born students is of the same order, 15,000.14 Out of these, a certain percentage will turn out to be creative. Let us be very generous and say 50 percent. These are rough estimates, but the key point is that the United States is unlikely, even with the best educational methods for promoting technical creativity, ever to produce more than about 7,500 technically creative individuals per year.
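The whole chain of assumptions can be written out explicitly. The following back-of-envelope sketch in Python uses only the figures just given; the variable names are mine, added for illustration.

births_per_year = 4_000_000        # U.S. births per year
share_above_iq_threshold = 0.02    # share above the "technical PhD" IQ threshold
share_with_motivation = 0.15       # share with the motivation and opportunity

above_threshold = births_per_year * share_above_iq_threshold       # 80,000
phd_equivalent_estimate = above_threshold * share_with_motivation  # 12,000

# Cross-check: about 15,000 technical PhDs per year go to U.S.-born
# students, the same order of magnitude as the estimate above.
technical_phds_us_born = 15_000
share_turning_out_creative = 0.50  # the "very generous" assumption
max_creative_in_house = technical_phds_us_born * share_turning_out_creative

print(above_threshold, phd_equivalent_estimate, max_creative_in_house)
# 80000.0 12000.0 7500.0

Even doubling any one of these shares would leave the in-house maximum well below the visa figures quoted above, which is the point of the comparison that follows.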

I do not know how to estimate the number of creative individuals in the pool of H-1B applicants, but it is very probable that out of 200,000, or even the current 65,000 individuals mentioned above, we can find many more than the maximum number that we can produce in-house (about 7,500). And even out of the 18,466 individuals who immigrated in the very unattractive year 1972, we could probably have found a number of technically creative individuals comparable to the maximum number we could ever produce in-house. Thus, importing technically creative human beings is a blessing to the United States, since the country could not, regardless of social improvements, produce them in-house. Indeed, the importance of maintaining and increasing the number of immigrant technical specialists can hardly be overemphasized. Currently, whether the immigration of technical specialists will be fostered or stunted is a matter of debate.16 And even if policies favorable to immigrating technical specialists prevail, the competition from emerging countries, such as China and India, makes the outcome uncertain; in any case, the outcome will be critical to the future prosperity of the United States. Some U.S. technical companies are lobbying actively for increasing the number of H-1B visas. Their argument is based on their immediate need for technical personnel. This reason is valid, of course, but much more is at stake: the country’s technical creativity is fundamentally based on the immigration of creative individuals. Other countries are beginning to take the importance of brain mobility very seriously. The cornerstone of the NAFTA Plus free-trade agreement reached by Canada and France is the free mobility of skilled labor across their borders.17

This does not let us off the hook in our need to address the dire problems in our education system and the accessibility of education to our increasingly diverse population (another paper could easily be devoted to this topic). But it provides an argument as to why the United States should not focus on questions 1 and 2 posed earlier, and why we should focus instead on how to encourage technically expert and creative individuals to work together. Here I concentrate not so much on what enhances the creativity of groups of technically creative people, but on what kills it, since we have much more reliable knowledge of the factors that stunt creativity than of those that encourage or enhance it.

To understand what kills technical creativity, let us return briefly to creativity in general. I am simplifying, but from what we know, creative achievement is made possible, basically, by four elements: two personality traits18 and two abilities, which are independent of having creative personality traits.5

These latter two abilities are:

a. The ability to master the knowledge and skills by which the creative accomplishment can be carried out. You need to learn the vocabulary and grammar of a language in order to become a novelist, for example.
b. The ability to sustain an intense, focused effort toward a specific goal.

The two creative personality traits are:

c. Uninhibited ideational fluency. The creative person is prolific in generating ideas of some type (musical, visual, verbal, etc.) and, importantly, is not inhibited in the scope and range of these ideas. Many noncreative individuals are prolific, but their ideas are restricted to some range; limits to their ideational fluency are imposed by some inhibiting factor—for example, social conformism.
d. Autonomous personal vision. The creative person is guided by an internal vision and not by society’s values, opinions, and approval; this trait is measured, for example, by the Sociotropy-Autonomy Scale.19

Blocking any of these four elements stunts creativity. Since we are focusing on creative environments, we can assume that the individuals in these groups have acquired, and will maintain, abilities a and b. So I focus here on c (uninhibited ideational fluency) and d (autonomous personal vision). Ideational fluency is likely an inborn trait,4 so let us examine the factors that inhibit it. The tendency to look inside is likely also inborn,4 so let us examine what threatens the autonomy of the inner vision. For an environment to remain creative, creative individuals must not be inhibited in their ideational fluency and must not be stunted in their tendency to follow their autonomous inner vision.

With these criteria in mind I would argue that, broadly speaking, the presence of humanities and social sciences departments in the university is not necessarily an asset to technical creativity. I am not saying that these departments divert money and resources that should be devoted to science and engineering (a commonly held opinion in technical academic departments).

My view is that currently, rather than broadening the mind, they often produce individuals inhibited by political correctness, who are discouraged from relying on their autonomous vision by relativism and a largely nihilistic philosophy that engenders no passion for truth and places no value on values, including radically unconventional ideas and individual achievement. Paradoxically, the very disciplines meant to provide breadth can actually foster limits and inhibitions. So, for the purpose of this discussion, I exclude the general university and study only the technical university and the research laboratory. As a reference point, I have in mind somewhere like the old, prebreakup Bell Laboratories, where I happened to start my professional career in 1979.

In reviewing the literature on creative accomplishment, I have not noticed, strangely enough, any particular emphasis on something that is intuitively accepted as quite important for it: the role of the leaders. Pericles, Lorenzo de’ Medici, Frederick II of Prussia, and Peter I of Russia were evidently crucial for the existence of creative environments. In the modern United States, what role do the leaders of research centers (by which I mean, in general, environments with a group of technically creative individuals) play in fostering or stunting the creativity of the group they lead? I will argue that the leaders must be capable of appreciating and understanding the creative personality. In practice, this is rarely the case, but it was true of the old Bell Labs. In the research area, its managers were selected from creative researchers. After Bell Labs came under pressure to produce quick and relevant results, the managers ceased to be selected from the pool of creative researchers. Bell’s efficiency went up fast—but its creativity died even faster.

There are many aspects of this question to investigate. One possible study could examine the relationship between technically creative output and type of manager. The chancellor or president of an academic institution is usually an individual of high academic standing. But they in turn take some direction from a higher board.

So, for example, is there a relationship between the number of patents per faculty member and the qualifications of the trustees of a university?

[Figure 2. Yearly Ratio of the Number of Technical (or Academic) to Nontechnical (Nonacademic) Trustees (or Regents)]

*Formula for the ratio: (Technical Professional + Academic) / (Nontechnical Professional + Public Servant + Political Leader), rounded to the nearest hundredth.
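Read as code, the ratio is straightforward; the sketch below is mine, and the example board composition is hypothetical.

def trustee_ratio(technical_prof, academic, nontechnical_prof,
                  public_servant, political_leader):
    # (Technical Professional + Academic) /
    # (Nontechnical Professional + Public Servant + Political Leader)
    return round((technical_prof + academic) /
                 (nontechnical_prof + public_servant + political_leader), 2)

# Hypothetical 20-member board: 6 technical professionals, 4 academics,
# 5 nontechnical professionals, 3 public servants, 2 political leaders.
print(trustee_ratio(6, 4, 5, 3, 2))  # -> 1.0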

[Figure 3. Yearly Number of Patents per Technical Faculty]

The following analysis is admittedly not very scientific. I took a look at three universities in California and plotted the “technical expertise” of the trustees against the patents per faculty member produced every year (see Figures 2 and 3). Figure 2 plots the ratio of technical to nontechnical regents (or trustees); Figure 3 plots the yearly number of patents per (technical) faculty member. It is hard to see any significance in comparing the University of California (UC) with Stanford University. But the prominence of the California Institute of Technology (Caltech) versus UC and Stanford is clear. Of course, this difference may turn out to be due to many other factors. But it may be worth asking whether the ability to appreciate how technical creativity occurs is a quality that is important for trustees to have. Does it make any difference to a university’s technical creativity if a trustee is a successful real estate salesperson, a political person, or a Nobel Prize winner in physics? I do not know the answer to this question, but it may be worthwhile to find out. The graphs illustrate that Caltech has both more technical and academic trustees and a tendency to produce more patents per faculty member, which may suggest some correlation between a university’s governing board and the university’s creative output. The trends at Stanford and in the UC system, however, suggest other contributing factors, such as research and development (R&D) spending, that help determine a university’s creative output. Though Stanford has had a good many more technical and academic trustees than the UC system has had among its regents, for many years the UC system produced more patents per faculty member.

Let us consider another source of potential weakness in technological leadership. A subtly insidious type of leader cannot understand how creativity occurs, and such leaders are pervasive in what are supposed to be creative research environments. I use the initialism IBNC for this type: “intelligent but not creative.” Unfortunately, the IBNC is probably the most common type among university professors—and this is true of the humanities, too: scholars, not creative writers, populate English literature departments.

IBNCs have abilities a and b above—that is, the ability to master technique and the ability to persist with effort and focus. They also often have part of trait c: they can be prolific. But the IBNC’s ideational faculty is inhibited, restricted by social conformism, approval, and values—and by the academic promotion and reward system. Finally, IBNCs, being highly sociotropic, lack trait d: they have no autonomous personal vision to drive them and to sustain them against the opposition of the prevalent cultural paradigms. They are motivated by external factors, susceptible to group pressure, afraid to experiment with new things, and repelled by risk and ambiguity. IBNCs are fundamentally incapable of moving against the accepted vision or opinion of the group in relation to which they define themselves.

The successful creative research environment is characterized by its power to prevent IBNCs from becoming leaders or from dominating the group by intimidation and other social means. This is not easy, for the simple reason that the filters (notably schools) that select for abilities a and b tend to select many IBNCs, who therefore are inevitably found within any potentially creative research environment. The task is to isolate, restrict, and if possible remove IBNCs from the group. To achieve this result, the leadership of the group needs to be in the hands of creative individuals. Once power passes to the IBNCs, the process is irreversible, and the creative group ceases to be such. This decay into mediocrity may be delayed, but it is almost inevitable. (It seems analogous to the inevitable increase in entropy.) In practice, leadership by creative people is very difficult to achieve because technically creative people generally are not attracted to management, which is a social task. Thus, much support (administrative and otherwise) should be provided to creative leaders, so that their task can be done part-time, rotated, shared, and so on. This is what is ideally done in the best research universities and laboratories.

But such an ideal is forever threatened with takeover by the IBNCs, especially in a time of scarce resources. An obvious example of IBNC takeover is the dominance of funding-agency bureaucrats when money is scarce. They end up controlling much of the research activity by inevitably fostering group projects (always in culturally sanctioned areas) and megaprojects (always in interdisciplinary and sanctioned areas). Such control kills the autonomy of the creative person’s vision and inhibits ideational fluency.

Yet even with the best methods of guaranteeing that the group will be led by a technically creative person and that IBNCs will not come to dominate it, an intrinsic paradox exists in the notion of a creative group. It lies in the fact that a key personality trait of the creative person is autonomy, and rejection of what is currently accepted by the group. In fact, any method of organizing a creative group will fail if it amounts to an attempt at forming a herd of mavericks. On the other hand, collaboration among creative people is possible and effective—but if, and only if, it is from the bottom up, as a free association based on ideas of common interest. Hence the old rule of thumb of the most creative research centers: “Hire the best and let them free.” The major reason for the failure of potentially creative environments is that while lip service is paid to this rule, in practice the mavericks are herded under the dominance of an IBNC. The old Bell Labs really ended not with the breakup of AT&T in 1984 but with the handing over of the research leadership to businesspeople in the 1990s.

So, let us assume that we have managed not to destroy a creative environment: we have not let IBNCs become managers or dominant, and we have let the creative people be free, including free to choose with whom to interact or not interact. Three other factors can still hinder the creative output of the group. The first is lack of resources. Good resources and the best tools are the most important factors in helping the creative person be more productive, as is obvious. Farmers produce much more food now than they did two hundred years ago not because they are smarter or more creative but because they have better tools.

susan hackwo od

to fostering a technically creative environment by listing five rather obvious rules for not killing the creativity of a group of technically creative researchers: 1. Hire the best and let them free. 2. Do not let IBNCs (intelligent but not creative persons) become managers, leaders, or even dominant in the group. 3. Provide the best research tools. 4. Do not make access to basic research resources depend on constant, fierce competition where noncreative agents pick winners. 5. Move the group to a location where the quality of personal life is high. These rules were pretty much in effect in the heyday of the old Bell Labs. Brain mobility imported these rules from Bell Labs to California, and Silicon Valley was born. Often, what appear to be the most original ideas have already been invented.

Notes 1. Robert Solow, “The Last 50 Years in Growth Theory and the Next 10,” Oxford Review of Economic Policy 23, no. 1 (2007): 3–14. See also http://magazine. amstat.org/blog/2011/03/01/econgrowthmar11/. 2. John Kao, Innovation Nation (New York: Free Press, 2007). 3. Sir Ken Robinson makes an entertaining and profoundly moving case at http:// www.ted.com/index.php/talks/ken_robinson_says_schools_kill_creativity. html for creating an education system that nurtures (rather than undermines) creativity. 4. Robert J. Sternberg, ed., Handbook of Creativity (Cambridge: Cambridge University Press, 1999). See also James C. Kaufman and Robert J. Sternberg, eds., The Cambridge Handbook of Creativity (Cambridge: Cambridge University Press, 2010). 5. Charles Murray, Human Accomplishment (New York: HarperCollins, 2003). 6. J. C. Kaufman and R. J. Sternberg, eds., The International Handbook of Creativity (Cambridge: Cambridge University Press, 2006). 7. J. P. Guilford, The Nature of Human Intelligence (New York: McGraw-­Hill, 1967). 8. D. T. Lykken, “Research with Twins: The Concept of Emergenesis,” Society for Psychophysical Research 19 (1981): 361–72.

158

technically creative env ironments 9. Colin Martindale, “Biological Bases of Creativity,” in Sternberg, ed., Handbook of Creativity. 10. http://nobelprize.org/. 11. William A. Wulf, “The Importance of Foreign-­Born Scientists and Engineers to the Security of the United States,” statement to the U.S. Congress, September 15, 2005, http://www7.nationalacademies.org/ocga/testimony/ Importance_of_Foreign_Scientists_and_Engineers_to_US.asp. 12. J. G. Whelan, “Brain Drain: A Study of the Persistent Issue of International Scientific Mobility,” Committee on Foreign Affairs, U.S. House of Representatives (Washington, DC: U.S. Government Printing Office, 1974). 13. Even though the H-­1B visa is a nonimmigrant visa, it is one of the few visa categories recognized as dual intent, meaning that an H-­1B holder can have legal immigration intent (i.e., to apply for and obtain the U.S. Permanent Residence Card—­the “green card”) while still a holder of the visa—­see the November 2006 statement by the U.S. Citizenship and Immigration Service, “Characteristics of Specialty Occupation Workers (H1-B),” for the financial years 2004 and 2005. A June 2010 report by Robert D. Atkinson for the Information Technology and Innovation Foundation, “H1-B Visa Workers: Lower-­Wage Substitute, or Higher Wage Complement,” http://www.itif.org/publications/h-­1b-­visa-­workers-­lower-­wage-­substitute-­or-­higher-­wage-­complement, noted that U.S. policymakers have long debated the merits of high-­skilled immigration in general and the H1-B visa program in particular. Defenders of the H1-B visa program maintain the value of the H1-B program in assisting U.S. technology firms to fill critical science and technical positions in the United States and enabling U.S. companies to be globally competitive. Detractors, mainly unions representing technical workers, argue that the H-­1B program is used by companies as a means to lower the wages they pay to domestic workers and that the added H1-B visa positions reduce the availability of U.S. jobs. This argument has recently been refuted by S. Mithas and H. C. Lucas Jr., “Are Foreign IT Workers Cheaper? US Visa Policies and Compensation of Information Technology Professionals,” Management Science 56, no. 5 (2010): 745–65. 14. See the 2008 and 2010 Science and Engineering Indicators published by the National Science Board at http://www.nsf.gov/statistics/seind08/ and http:// www.nsf.gov/statistics/seind10/. The Bureau of Citizenship and Immigration Services indicate a continuing increase in foreign graduate students from April 2008 to April 2009, with foreign enrollment in science and engineering fields growing by 8 percent. See http://www.nsf.gov/statistics/seind10/c2/c2s3 .htm/. 15. Temporary residents earned half or more of all U.S. doctorates in engineering, mathematics, computer sciences, physics, and economics in 2005. See the science and engineering indicators published by the National Science Board,

159

susan hackwo od chap. 2, http://www.nsf.gov/statistics/seind08/c2/c2h.htm#c2sh4. The rise in transnational education has not had much impact on foreign student flows according to Hans de Wit, The Dynamics of International Student Circulation in a Global Context (Rotterdam: Sense Publishers, 2008). The influence of the worldwide economic and monetary crises beginning in 2008 on international flows of students in the future is uncertain—­see http://www.nsf.gov/statistics /seind10. 16. See the Sanders Amendment 1223 to S.1348: Comprehensive Immigration Reform Act of 2007. 17. “Nafta-­Plus,” Wall Street Journal, October 20, 2008, http://online.wsj.com/ article/SB122445840565148489.html?mod=googlenews_wsj. In 2008 Prime Minister Stephen Harper of Canada and President Nicolas Sarkozy of France signed an agreement to begin negotiations for a free-­trade pact between Canada and the European Union with two-­way trade estimated to increase by 22.9 percent by 2014. A key element of the proposed free-­trade pact allowed a free labor market so that skilled workers could move easily back and forth across the Atlantic. Another major provision was to open up government contracts with the private sector to bidding from European companies. As of November 2010 the proposed Canada/EU free-­trade pact had not been implemented. 18. G. J. Feist, “The Influence of Personality on Artistic and Scientific Creativity,” in Sternberg, ed., Handbook of Creativity. 19. Peter J. Bieling, Aaron T. Beck, and Gregory K. Brown, “The Sociotropy-­ Autonomy Scale: Structure and Implications,” Cognitive Therapy and Research 24, no. 6 (2000): 763–80.

Bibliography Bieling, Peter J., Aaron T. Beck, and Gregory K. Brown. “The Sociotropy-­Autonomy Scale: Structure and Implications.” Cognitive Therapy and Research 24, no. 6 (2000): 763–80. De Wit, Hans. The Dynamics of International Student Circulation in a Global Context. Rotterdam: Sense Publishers, 2008. Guilford, J. P. The Nature of Human Intelligence. New York: McGraw-­Hill, 1967. Kao, John. Innovation Nation. New York: Free Press, 2007. Kaufman, James C., and Robert J. Sternberg, eds. The Cambridge Handbook of Creativity. Cambridge: Cambridge University Press, 2010. —­—­—­. The International Handbook of Creativity. Cambridge: Cambridge University Press, 2006. Lykken, D.T. “Research with Twins: The Concept of Emergenesis.” Society for Psychophysical Research 19 (1981): 361–72. Mithas, S., and H. C. Lucas Jr. “Are Foreign IT Workers Cheaper? US Visa Policies

160

technically creative env ironments and Compensation of Information Technology Professionals.” Management Science 56, no. 5 (2010): 745–65. Murray, Charles. Human Accomplishment. New York: HarperCollins, 2003. Solow, Robert. “The Last 50 Years in Growth Theory and the Next 10.” Oxford Review of Economic Policy 23, no. 1 (2007): 3–14. Sternberg, Robert J., ed., Handbook of Creativity. Cambridge: Cambridge University Press, 1999. Whelan, J. G. “Brain Drain: A Study of the Persistent Issue of International Scientific Mobility.” Committee on Foreign Affairs, U.S. House of Representatives. Washington, DC: U.S. Government Printing Office, 1974.

161

Chapter Eight

Entrepreneurial Creativity
Timothy F. Bresnahan

World economic growth, particularly continued U.S. economic growth, depends on founding new markets and new industries. Scientific and technical invention, no matter how brilliant and creative, is only one step in the founding of high-tech industries. Entrepreneurial creativity is also needed.1 Such creativity is typically linked to scientific and technical advances, and is sometimes displayed by the same people and firms that make key technical advances.2 In a market economy, however, entrepreneurial creativity is often widely dispersed, and the openness of the market economy is as important to it as the openness of science is to creative outsiders.

Entrepreneurial creativity locates and exploits overlaps between what is technically feasible and what will create value for society.3 This is the key step in the founding of new technology-based industries, and it is often very difficult. The list of feasible scientific and technical advances is a long one. So, too, is the list of new products, new markets, and new industries that will create value, either by serving existing needs with fewer resources or by generating new ways of making people better off. Economic growth since the first industrial revolution has taken both forms of value creation: the provision of food, clothing, and shelter requires vastly fewer resources today than it did a few centuries ago; and the invention of new and better goods and services lets us live much better than our ancestors, for whom subsistence and warmth were critical.4

Finding the overlaps between technical opportunity and value creation is one of the most demanding conceptual tasks in creating technical advances in the modern economy, and it depends critically on entrepreneurial creativity. Seeing new overlaps is difficult because knowledge is dispersed widely in the economy. The most important economic growth driver of the rich economies in recent years involves the use of computer systems in large organizations, in markets (electronic commerce), and in the creation of online entertainment media such as social networks. Computer systems draw on new science and technology to a great degree, of course. However, understanding computer technology deeply does not endow computer specialists with deep knowledge of markets, entertainment, or the delicate arts of social communication. That knowledge is, typically, held by others. More generally, when markets and industries do not yet exist, there is no good reason for the same person to have knowledge of both technical feasibility and value creation.

***

One source of entrepreneurial creativity is individuals who see the overlaps. These people are the most obvious “entrepreneurs” in society, particularly if they found new firms. For modern technical change, however, as we shall see, it is an important mistake to think only about the lone, heroic engineer, and a further mistake to think only about that person’s garage. Much entrepreneurial creativity occurs in complex market processes involving a number of creative steps, and some occurs in large, complex organizations too big to fit in a garage. This is not to say that the founding of new firms (even in garages) is unimportant, but rather that it is only one aspect of the creative process that leads to new markets and industries.

Entrepreneurial creativity is related to, but distinct from, scientific or engineering creativity. Indeed, some very important instances of entrepreneurial creativity are dismissed by technologists as uncreative—“mere marketing.”

Entrepreneurial creativity lies in seeing the overlaps between technical feasibility and value creation. Entrepreneurial implementation lies in building the firms, markets, or industries that exploit a technological opportunity to create the value. In many ways, this market focus distinguishes entrepreneurial creativity. The new product or process innovation that serves an important need may appear quite mundane, but if it was not foreseen, it is creative. Indeed, a good working definition of practical creativity ought to emphasize the transition from a state in which something was unforeseen to a state in which it is compelling. Many innovations seem obvious with hindsight because they are compelling to their users.

An Economic Definition of “Technical Progress” with Implications for Creativity

The economic definition of “technical progress” is broader than the word “technical” suggests; hence my focus on creativity goes beyond the technical. The definition of technical progress is relative to the “production set” in the space of all the inputs (labor, capital, energy, clean air) and outputs (houses, iPods, music, etc.) we care about. Any increase in knowledge that expands the production set, permitting the better satisfaction of human desires with the same inputs or equal satisfaction with less input, counts as “technical progress.” This is an explicitly consumerist definition, so product quality improvements, seen from the buyer’s perspective, count as technical progress.
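One way to state the definition formally (the set notation here is mine, not the chapter’s): let $P_t$ denote the production set at time $t$, the set of input-output pairs that accumulated knowledge makes feasible.

\[
P_t = \{(x, y) : \text{knowledge at time } t \text{ allows producing output bundle } y \text{ from input bundle } x\}
\]
\[
\text{technical progress between } t \text{ and } t+1 \iff P_t \subsetneq P_{t+1}
\]

An expansion of the set means either a better output bundle $y$ from the same inputs $x$, or the same $y$ from a smaller input bundle $x$; both match the forms of value creation described above.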

The sense of “inputs” is inclusive. For instance, they include the quality of the earth’s atmosphere, so that knowledge that would let us create the same goods and services while putting less carbon into the atmosphere counts as technical progress if atmospheric carbon presents a long-term problem. This example also shows that the value of different kinds of technical progress is contingent on the availability of different inputs. Assuming that the climate scientists are right, and that the carbon-carrying capacity of the atmosphere is much less than was once thought, the atmosphere is now a scarce input, and technical change that permits fulfilling human needs while putting less carbon into the atmosphere is newly valuable.

The several industrial revolutions, which focused largely on the manipulation of physical objects in manufacturing and mining, or the mechanization of farming, which also focuses on physical manipulation, are easily perceived as technical change. Less obvious is that the installation of a corporate enterprise’s new resource-planning system, which focuses on the work of a large white-collar bureaucracy, is also technical change. The economic perspective is useful here. At the time of the first industrial revolution in the eighteenth century, the growth bottleneck facing the economy was the amount of physical goods that could be produced by muscle labor (human or domestic animal). Similarly, at the time of the second industrial revolution in the late nineteenth and early twentieth centuries, the automation of the work of blue-collar workers relaxed a growth bottleneck and permitted society to have more while working less (and using less of other resources). Today, many more people in the rich economies work in white-collar bureaucracies than (directly) in manufacturing, mining, or agriculture. If we are to have more output with less input (less labor, less carbon, etc.), one task for technical progress is the automation of white-collar bureaucracies.

Entrepreneurial creativity in the age of automating bureaucratic work has had, and continues to have, a particularly hard task. For a variety of reasons, seeing overlaps between feasible technical improvements and the creation of new economic value in this sphere is extremely difficult. This fact makes supporting entrepreneurial creativity all the more important.

Definitions of “Innovation” and “Invention”

Technical progress in the economic sense involves a number of different kinds of creative endeavors.5 Here we focus on the exploitation of science and engineering to drive long-run economic growth.

This process involves three very different creative activities: invention, innovation, and diffusion. Precise definitions of these three activities are the subject of some debate, but the key distinctions are as follows:

Invention: The conception of new scientific or engineering ideas.
Innovation: The development of new marketable products or new usable processes.
Diffusion: The adoption of new products or processes widely in the market.

Each of these activities involves creativity; innovation and diffusion involve entrepreneurial creativity.

Invention

Invention is what most people have in mind when they think of technical change. It is a varied activity, including basic science, applied science, and engineering. It occurs in a variety of disciplines, or in no discipline; it may draw on knowledge from multiple disciplines; and it is found in academic life and in companies. The key point for the purpose of this discussion is that invention is technical in the narrow sense. A closely related idea is the technical knowledge stock of the economy, which is increased by invention: invention creates new knowledge, which is added to the stock. To understand the relationship between entrepreneurial creativity and invention, I focus on two aspects of the technical knowledge stock—that is, of accumulated inventions.

First, not all of the technical knowledge stock of the economy is known to everyone. Much scientific and engineering knowledge is open, of course. When invention occurs in academic life, or when we academics capture the knowledge of inventors in commercial life (i.e., theory catches up with practice), technical knowledge becomes part of the knowledge stock of the economy. But this does not mean that everyone knows it; often, only specialists do. Invention in commercial life often remains private, becoming part of the knowledge stock of a company (and in that limited sense, of the economy).6

The key point for understanding its relationship with entrepreneurial creativity is that accumulated scientific and engineering knowledge is distributed in society. The more open the access to scientific and engineering knowledge, the easier it is for the distribution of knowledge to change and for entrepreneurial creativity to be sparked; but this is not the same as saying it is infinitely easy.7

Second, a distinction must be drawn between scientific and engineering inventions and new inventions that are technically feasible. Although the stock of scientific and engineering knowledge is largely codified and maintained in excellent order, knowledge of which potential new inventions are technically feasible is distributed among technologists in a very different way. Some potential new gains in scientific and engineering knowledge call for tremendous creativity. Others are advances that any reasonably well-trained engineer can see to be technically feasible. Between these extremes lies a great variation in knowledge about potential inventions, and in the degree to which this knowledge is distributed in society. Some potential inventions can be foreseen by any engineer if exactly the right question is asked. The distribution in society of scientific and engineering knowledge about which inventions are technically feasible leads us naturally to entrepreneurship, for it is one of the roles of the science- or engineering-based entrepreneur to know which inventions will be valuable and to inquire, with adequate specificity to permit practical engineering, whether they are technically feasible.

Innovation

The key difference between invention and innovation is that innovation is market-facing. The watchwords of innovation are “marketable” and “usable.” Thus, innovators are typically focused on very different values from inventors, notably on implementation, speed, and cost.

To a process innovator, the fundamental question is whether the process works, whereas a product innovator wants to know whether the product will sell. Innovation is not concerned, unless it is compelled to be, with generality, clarity of statement, or even correctness (beyond reasonable empirical assurance that its results are going to work or sell). In the process of innovation, the invention of new technical knowledge is a cost, not a benefit. To be sure, the best way to innovate sometimes involves invention, but new knowledge is not innovation’s goal: new products and processes are. Yet even when no invention is involved, innovation is a creative activity. Innovation, because it uses technology to fulfill an external need, is fundamentally about overlaps. Whether undertaken by large firms or small, new firms or old, innovation involves entrepreneurial creativity.

Diffusion

Even after a product or process has been commercialized, users may not adopt it immediately. The diffusion of important new technologies is typically a slow process. Indeed, economic studies that decompose aggregate technical progress into its components put enormous weight on diffusion.

What does diffusion have to do with creativity? Sometimes the slow pace arises because users’ adoption of a new technology itself calls for invention or innovation (by the user). For example, the diffusion of computing in commercial environments (accounting systems in one era, electronic commerce in another) is far slower than the diffusion of computing in technical environments (scientific laboratories, factory engineering) because of the organizational innovation and invention needed to make effective use of computing. It is one kind of creativity to invent the computer, another to invent enterprise resource-planning software, and yet another to create value for a specific firm while installing that software.

The last class of creators—those in the individual firms installing the software—are very important from the perspective of economic growth.

The market-facing work of innovators sometimes speeds up existing diffusion processes by making adoption or adaptation of new products and services easier or cheaper. For substantial transformations, however, innovators must trigger new diffusion processes. Long-term studies tell us that 3 percent a year is a very good rate of technical progress (increase in generalized output per unit of generalized input) for a rich economy.8 One reason this figure has not increased secularly, and may soon be decreasing, is that diffusion of important modern technologies, especially of the business data-processing technologies that support white-collar automation, is slow.

Invention, innovation, and diffusion are each necessary and complementary for technical progress. The question of which of them is most important is delicate, as it always is in the case of complements: a causal definition of “most important” fails with complements. Take away any one of the three—invention, innovation, or diffusion—and the other two are unproductive. That said, one definition of “most important” is what requires the most resources. Another is what requires the most difficult creative steps to achieve. If creativity is highly rewarded economically, these two definitions coincide.
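For intuition about the 3 percent figure cited above, a quick compounding check (my arithmetic, not the chapter's):

# At 3% annual technical progress, generalized output per unit of
# generalized input doubles roughly every 23-24 years.
import math
doubling_time_years = math.log(2) / math.log(1.03)
print(round(doubling_time_years, 1))  # -> 23.4

Sustained over a working lifetime, that "very good" rate therefore amounts to roughly a fourfold improvement, which is part of why slow diffusion is so costly.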

From the Integrated Circuit to the Personal Computer

Several themes of this chapter are illustrated in the series of entrepreneurial inventions and innovations that began with the invention of the integrated circuit (IC) in the 1950s and in due course led to the widespread use of personal computers (PCs) in the 1990s. By examining this series, we see the role of entrepreneurial creativity in the founding of a number of very important industries and markets, the complementary roles of technical creativity in invention and entrepreneurial creativity in innovation, and the nature of the institutions that have supported the entrepreneurial creativity.

As is obvious from the perspective of economic growth, the discovery of the transistor effect in 1947 was one of the most valuable pieces of twentieth-century science. But much of the economic value derived from practical use of the transistor effect has emerged from the subsequent invention of the IC and the large number of markets and industries that entrepreneurs created in taking advantage of the innovative opportunity. The IC is a very important general-purpose technology (GPT), and it has the main technical and market characteristics of a GPT. Different kinds of ICs have been useful in a wide variety of devices. Some of those devices are themselves GPTs, notably the computers and telecommunications equipment underlying advances in information and communications technology. ICs today are found everywhere and are linked to important and valuable innovation. Moreover, the IC has been open to continued rapid technical progress, enabling ever more powerful, cheaper, or less power-hungry devices, as well as a widening range of devices. Just as ICs have enabled much innovation, the creation of new markets and new industries has provided the funding for round after round of improvements in ICs. Such are the hallmarks of a GPT.

A number of other features of the IC are relevant to understanding the role of entrepreneurial creativity in realizing the tremendous gains that have flowed from it. First, ICs are manufactured with substantial scale economies, and these scale economies are dynamic because there is learning by doing in manufacturing. The one-hundred-millionth unit of any particular IC design is likely to cost far less than the one-thousandth unit. In the language of economics, learning by doing means that the marginal cost of manufacturing falls with volume and falls over time.
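A standard way to model this is an experience curve; the sketch below is my illustration (the chapter gives no formula), assuming an 80 percent progress ratio, meaning each doubling of cumulative volume cuts unit cost to 80 percent of its prior level.

# Experience-curve sketch: cost(n) = c1 * n**(-b), with b derived from
# an assumed progress ratio of 80% per doubling of cumulative volume.
import math

def unit_cost(n, first_unit_cost=100.0, progress_ratio=0.80):
    b = -math.log2(progress_ratio)   # about 0.322 for an 80% curve
    return first_unit_cost * n ** (-b)

print(round(unit_cost(1_000), 2))        # one-thousandth unit: ~10.82
print(round(unit_cost(100_000_000), 2))  # hundred-millionth unit: ~0.27

On these assumptions the one-hundred-millionth unit costs only a few percent of what the one-thousandth unit cost. This falling-cost logic is also what makes the bottom-of-the-learning-curve pricing mentioned below viable: price near the anticipated high-volume unit cost and let cumulative volume drive costs down to meet it.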

Second, the IC requires complementary innovation and investment to be useful (in this respect it is a “pure” GPT). ICs alone are useless; to be useful, an IC must be designed into an electronic device. Electronic devices, in turn, are often useless without complementary innovation. A computer without applications software, for example, is no more than a “boat anchor,” in the dismissive industry phrase.

To show these important results linking entrepreneurial creativity to value creation, let us look at only a subset—albeit the most valuable subset—of the innovations and inventions that stemmed from the IC. An important feature of the founding of Silicon Valley in California was the complementarity between the technical invention of the IC in the late 1950s and a number of entirely managerial and commercial innovations. One of these, which was extremely valuable, was the innovation of a pricing model: bottom-of-the-learning-curve pricing. The combination of technical inventiveness and commercial and managerial innovation around the IC led to a wide range of complementary inventions and innovations. To make the market point we need not follow all of these. Instead, we can once again follow the money through the invention of the microprocessor, the creation of the PC industry, the invention of the spreadsheet and the word processor, and the innovation of the IBM PC. The IBM PC diffused widely into corporate white-collar work, supporting “individual productivity applications” and creating tremendous economic value. I emphasize this path not because it is the only possible route that could have led us to the highly valuable cluster of markets in the PC industry, but because it shows us the essential role of entrepreneurial creativity and the set of supporting institutions, notably market institutions, that enable it.

The Founding of Silicon Valley

The differences between scientific creativity and entrepreneurial creativity—and their complementarity—emerge from a famous example of how scientists migrated into entrepreneurship and became technologist-managers.

Many people know the story of how a brilliant physicist, William Shockley, attracted a number of other brilliant young scientists, including Gordon Moore and Robert Noyce, to his entrepreneurial firm.

attracted a number of other brilliant young scientists, including Gordon Moore and Robert Noyce, to his entrepreneurial firm; Andrew Grove joined them later, at Fairchild. The creative ideas behind the semiconductor industry at that time were quite new; Shockley's Nobel Prize was awarded (to him and others) in 1956 "for their researches on semiconductors and their discovery of the transistor effect." After a dispute, a number of the younger scientists left Shockley Semiconductor in 1957 to found Fairchild Semiconductor inside a large, established electronics company. Later, they left Fairchild to form a start-up, Intel, which is with us today.9

Many people also know that an extraordinary number of scientists and engineers learned how to be entrepreneurs at Fairchild Semiconductor. The firms they founded formed the backbone of the Silicon part of Silicon Valley.10 This string of start-ups—many founded as spin-offs from Fairchild—also led to the formation of the venture capital industry of Silicon Valley, another institution that is still with us today.

Turning Technologists into Technologist-Managers

It is worth understanding what those scientists and engineers learned at "Fairchild University" and how this knowledge was useful in their entrepreneurial creativity. According to Moore, who went on to head Intel, they learned to be "technologist-managers." This change called for a great deal of retraining. First, the would-be technologist-manager had to learn to be less interested in the fundamental intellectual concerns of science. As Moore wrote in 2001:

[T]he technologist-manager had to learn to guide innovation with an understanding of both commercial and technical goals. These managers needed first to be scientists with a deep understanding of the subject. But the demands of the firm mean that the generality typical of the university style
lab is far too inefficient. These technologist-managers need to be able to plot the shortest path to workable discovery.11

Second, the technologist-manager, even in an area as full of scientific and technical promise as the young Silicon Valley, needed to be attuned to labor-market and product-market concerns. Famously, the founders of Silicon Valley learned why they had to be effective people managers from watching Shockley do a bad job of that, and they learned how to be effective people managers the way almost everyone does, from experience and practice. Scientists are only very rarely oriented to be people managers in the sense that businesspeople are, and need a great deal of experience to learn the skills. Yet these skills are, as Moore points out, critical to implementation of new innovations.

Product Market Orientation and Seeing the Overlap

One of the most difficult tasks for an entrepreneur is seeing the overlap between (1) what is technically possible with a bit more invention, and (2) what demanders in the market will buy. Innovators need to see overlap opportunities, for that tells them which innovation will create economic value. Ideas about the overlap are the essential feature of entrepreneurial knowledge in technology industries. Many scientists and engineers are very well trained in (1), yet have weak skills in (2). Indeed, many people choose careers in technical specialties because, at an early age, they realize they dislike thinking about (2) at all.

The ability to see the overlap has some of the features of crossing the boundary between scientific disciplines. But a key difference is that the knowledge about demand in most markets, especially demand for new products or processes, is badly codified and not structured. A mind that is good at grasping the physical sciences is not always good at the soft-studies tasks needed for demand assessment. However—and this is a crucial point—a scientist or technologist
who knows the limitations of his or her own knowledge can found a market in which demand reveals itself. As we shall see, founding a market itself calls for considerable commercial insight, but it saves the scientist/entrepreneur from a great deal of effort in investigating demand needs, or even in learning who the potential demanders might be.

A Network of Firms and People to Train Scientist-Managers

Over a long period of time, a network of knowledge sharing arose in Silicon Valley. One of the great benefits of having a large number of entrepreneurial firms with similar interests in the same region was the growth of this network. The knowledge stock of the IC industry, for example, included not only technical knowledge about inventions, but also market knowledge about innovations. Entrepreneurs, not all of them working in the same firm, knew (and know) other people to whom they could turn for critical labor-market information ("Should I hire Jo as my marketing person? I know you worked with her," etc.) and product-market information ("Which standard will emerge as the market leader?" etc.).

This market knowledge is not generally shared across companies in the same way that scientific and technical knowledge is. But market knowledge is no less important to an entrepreneur than scientific and technical knowledge. An entrepreneurial firm typically has important resource constraints, and thus its ability to undertake complex market research may be limited. Cultures able to support entrepreneurship, with their open flow of scientific information, open systems, and regional clusters, can lower the costs of founding a successful entrepreneurial firm. Although I have picked hardware examples from the earliest days of Silicon Valley to illustrate the importance of management, of the conceptual shift from a scientific to a commercial perspective, and of product- and labor-market knowledge, the story of software is much the same.


Commercial Innovation to Encourage More Inventions by Customers

Let us return to the specific economics of the IC industry, and the issues raised by and opportunities created through learning by doing—in particular the innovation of bottom-of-the-learning-curve pricing. The IC itself was apparently independently invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild (Noyce later cofounded Intel). That story is well known. Less well known is that Noyce also innovated bottom-of-the-learning-curve pricing.12

The problem of selling a new IC is that, at first, before learning takes place, costs are very high. A firm that looks to its accounting system in order to guide pricing will, accordingly, set high prices. Noyce's insight was, first, that if volume could be generated for an IC, the seller could go down the learning curve and have lower costs and, second, that charging prices consistent with those lower costs from the beginning could induce a demand for volume. Essentially, the new firm lost significant money to begin with, but made it back once the volume had built up.13

Of course, a variant of this economic logic works for any new product with scale economies. What is unusual in the case of scale economies involving learning by doing is that to achieve the prices that will permit large volume demand, the seller must first produce in large volume, which gives the seller a particularly strong incentive to find a way to discover demand. Bottom-of-the-learning-curve pricing worked out very well for Intel. It was particularly effective in creating demand that the IC seller had never even imagined. This was an innovation, and an entrepreneurial one, for it called for insight not only into the economics of the firm's costs over time but also into the potential demand for ICs, and specifically into the problem that the potential demand was unknown and unknowable.

A substantial advantage of this pricing solution is that it did not require Intel to know the identities of its future volume customers
or their planned volumes. This is an important distinction in terms of the amount of entrepreneurial knowledge required. Consider the most common alternative solution to the problem: to seek out large customers. In the early stages of the IC, these were primarily defense/government customers or large, established electronics firms. Texas Instruments, for example, signed up IBM and many government customers. This solution works if demand is composed of key large customers who can be identified. Bottom-of-the-learning-curve pricing, in contrast, enables the seller to locate many small customers, who may be previously unknown to the seller. This will be particularly important when customers themselves are entrepreneurial firms that are yet to be founded, or firms that are not yet interested in the new technology (ICs)—or more generally, when there are important costs, such as search costs, of linking together the new technology with its users or with inventors of complementary technology.

A second, related strategy, that of volume discounts, also helps a seller hoping to go down the learning curve avoid the need to know its customers in detail. Intel early on adopted a system of volume discounts in order to give customers, including unknown customers, an incentive to design electronic devices that would sell in volume. This commercial innovation was also suitable at the time of its adoption for the limits of entrepreneurial knowledge. Again, an essential point here is that Intel did not need to know why a particular customer would want volume or even how much that customer might want. The volume discounts set up the possibility of an arm's-length, market, win-win situation, in which the customer made a large number of devices and Intel sold that customer a large number of IC components.

Another very important example of this approach will come up for study in a moment, but here let me point out that these business innovations—bottom-of-the-learning-curve pricing and volume discounts—economize on entrepreneurial knowledge. Intel did not need to know which kinds of electronic devices would serve which kinds of demand. The company could leave that very difficult problem
to its customers (the manufacturers of electronic devices) and its customers’ customers (users of electronic devices). This market strategy was designed for finding an overlap between technical opportunity, the IC, and value creation, as opposed to a contractual strategy in which buyer and seller agreed up front on large volume.
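
The economics of this pricing strategy can be made concrete in a short sketch. Everything numeric below is hypothetical, a fifty-dollar first unit on a standard "80 percent" learning curve rather than Intel's actual figures; the point is the shape of the cash flow, in which a seller who commits to a price near the projected cost floor loses money on early units and recovers it, with profit, once volume builds.

```python
import math

# Hypothetical learning-curve parameters: a $50 first unit, with unit
# cost falling 20 percent every time cumulative volume doubles.
C1, RATE = 50.0, 0.8
B = math.log2(RATE)   # about -0.32

def unit_cost(n):
    """Cost of the n-th unit produced."""
    return C1 * n ** B

def total_cost(n):
    """Approximate cumulative cost of the first n units (integral of unit_cost)."""
    return C1 * n ** (1 + B) / (1 + B)

print(f"unit 1,000:       ${unit_cost(1_000):.2f}")         # ~$5.41
print(f"unit 100,000,000: ${unit_cost(100_000_000):.2f}")   # ~$0.13, roughly 40x cheaper

# Bottom-of-the-learning-curve pricing: commit now to a price near the
# cost expected at a target cumulative volume. Average cost up to n is
# unit_cost(n)/(1 + B), so pricing at about 1.5x the floor breaks even
# right around the target and earns a margin on every unit beyond it.
target = 10_000_000
price = unit_cost(target) / (1 + B)

for n in (10_000, 1_000_000, target, 100_000_000):
    print(f"after {n:>11,} units: cumulative cash ${price * n - total_cost(n):+,.0f}")
```

The same mechanics underlie the volume discounts: the discount schedule hands part of the anticipated cost decline to whichever customer, known or unknown, can generate the doublings.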

Recombination

This particular pricing structure was supportive of what is known as recombination. Recombination is defined as taking ideas and inputs that already exist and putting them together (possibly with the addition of further invention or innovation) to accomplish something new. Although the commercial use of scientific and engineering knowledge is an extremely varied activity, economists and historians of technical progress noted long ago that most innovation is recombination. For example, Joseph Schumpeter wrote in 1939 that most innovation "combines components in a new way, or . . . consists in carrying out New Combinations."14

Recombination can be extremely difficult to foresee, and the searches for partner technologies with which to recombine are notoriously difficult.15 By putting the ICs they were manufacturing out in the market at attractive prices, Intel reduced those difficulties for potential customers—who flocked to Intel. Noyce's bottom-of-the-learning-curve pricing was an open invitation for customers to recombine ICs with other inventions and innovations to create new marketable products. This was an inspired decision, since without that recombination we would not have a large number of useful IC-based technologies today. Yes, a pricing scheme can be an "inspired" innovation!

More generally, a market strategy encourages recombination by unknown, future inventors and innovators. In a new technology and market area (such as the uses of ICs, which were clearly a broad field, but one as yet entirely unexplored), enabling and encouraging recombination could create a large amount of value. The commercial
innovation of a firm making a GPT, of building a market and setting it up to accommodate innovation by new partners, contributes to the environment supporting new entrepreneurial creation.

Recombination and Reuse within the Firm and across Customers

With many tens of thousands of different kinds of integrated circuits now in use, the IC is clearly a GPT. But so, too, was the IC when it was first invented. The volume of an IC's production that was consistent with going down the learning curve and achieving low costs exceeded the volume needed by any particular customer. And if every new customer needed a custom design, the costs of that design would need to be recovered.16 The best economic return and the best value creation would occur if ICs were designed to be general, at least to some degree. Generality/fit tradeoffs began to matter. This called for strategies to reuse the same IC design in multiple customers' devices.

Early on, an important solution arose: programmability. Products like programmable read-only memory (PROM) and its various improvements and descendants (EPROM, EEPROM, and others) pushed out the envelope of reuse of a single design. As everyone now knows, the most important invention in terms of programmability was the microprocessor. It is worth pointing out that the microprocessor, the "computer on a chip," was not invented to create the personal computer. It was invented to permit reuse of the same design across multiple customer products, initially calculators. But the invention of the microprocessor was about to be turned into one of the most valuable pieces of twentieth-century technical progress, leading, together with a large amount of entrepreneurial creativity, to the founding of a large number of firms and industries. The generality in the microprocessor did not anticipate, direct, or compel the entrepreneurial creativity that followed in the market. It enabled and permitted it.
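
A toy sketch makes the reuse logic visible. The "chip" below is one fixed design, a tiny interpreter; each customer ships a different product simply by loading a different program. The instruction set and the two products are invented for the illustration and have nothing to do with any real Intel part.

```python
def run_chip(program, inputs):
    """One general-purpose design: execute whatever program the customer supplies."""
    acc = 0
    for op, arg in program:
        if op == "load":
            acc = inputs[arg]      # read an input register
        elif op == "add":
            acc += arg
        elif op == "scale":
            acc *= arg
    return acc

# Customer A ships a thermostat: convert a raw sensor count to tenths of a degree.
thermostat = [("load", "sensor"), ("scale", 3), ("add", -400)]

# Customer B ships a timer: add one tick to a stored count.
timer = [("load", "count"), ("add", 1)]

print(run_chip(thermostat, {"sensor": 150}))  # 50 -- one product
print(run_chip(timer, {"count": 41}))         # 42 -- a different product, same chip
```

Each program is cheap to write; the expensive learning curve is ridden by the one shared design.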


The Personal Computer: Entrepreneurial Creativity Founds an Industry

Around the time of the invention of the microprocessor at Intel, a number of different entities were trying to create a personal computer, that is, a computer that would be used by one person.17 None of these efforts was succeeding commercially (though some were technically impressive). None led to the founding of the PC industry, or to mass markets in computer hardware and software. The problem was the lack of entrepreneurial knowledge; that is, no one had yet seen the overlap between a complete computer system—hardware, software, applications, and peripherals—that could be designed and built, and a computer system that a large number of people would buy. In short, entrepreneurial knowledge was scarce and valuable.

At this point, several of the key features of entrepreneurial creativity came together to ignite a process that created enormous economic value. Soon after their invention in the early 1970s, Intel microprocessors were available with volume discounts and bottom-of-the-learning-curve pricing. The point of this was, as noted earlier, to enable potential customers to invent or innovate in ways, unforeseeable to Intel, that would use a large number of Intel chips. While putting the computer on a chip into a computer was a technical advance that now seems obvious, there was nothing obvious ex ante about the commercial innovations that led to the founding of the PC industry. They combined creativity from a large number of sources.

Ed Roberts at Micro Instrumentation and Telemetry Systems (MITS) offered the first successful PC kit, the Altair—the device on the cover of the January 1975 edition of Popular Electronics. The immediate market was the kind of people who read that magazine: technically fluent users who would be called, in the language of the early PC industry, "hobbyists." There is a heated dispute about who invented the PC,18 which is irrelevant from an economic growth perspective. Roberts's
combination of the invention of a particular PC kit and the innovations associated with pricing and marketing it founded an industry. What were the key creative elements in his introduction of the Altair, and how did they draw on the earlier entrepreneurship at Intel? Roberts was commercially oriented, and he knew how to get the message out to a number of relevant customers; the magazine cover was a coup. Further, because of Intel's pricing schemes, Roberts was in a position to sell a kit from which one could assemble a working computer for less than the single-unit price of a microprocessor. He bought the microprocessors in bulk, taking advantage of the volume discount, and thus was able to offer his customers a low price. Roberts knew he would succeed only if he could sell a significant number of kits, but his success built volume for him and, of course, ultimately for Intel to an extraordinary degree, since Intel microprocessors can now be found in hundreds of millions of PCs.

In the short run, though, the problem was to finance a volume purchase of microprocessors and to create a mass market. Roberts would later say, "We were lucky to have a banker and a magazine who believed there was a real market."19 The "real market" was then counted in hundreds of computers, not the later tens of millions. Still, this was the finance and the publicity that ignited the industry. Aspiring entrepreneurs, and anyone who studies entrepreneurial creativity, could learn much from his focus on using a publisher to create a market presence and his willingness to work with any appropriate source of finance, not just venture capital.

***

To hammer home one of the analytical points of this chapter, there was much more than one piece of entrepreneurial creativity at work here. The founding of the PC industry turned not only on the creativity of an innovative device supplier, MITS, but also on the creativity of inventive and innovative component suppliers like Intel. One central point of the component suppliers' innovativeness was to permit, rather than to attempt to direct or to anticipate, invention and innovation by
their customers. They went to particular lengths to encourage innovations and inventions that would build large-volume businesses.

The PC business, as we now know it, is a mass-market volume business. That development required the alignment of entrepreneurial creativity from a number of different inventors and innovators. Many of the key innovations were market-building. Others were an invitation to create follow-on innovations and inventions. The nascent PC industry, to a considerable degree, constructed for itself an environment in which many different inventors and innovators, each with some—but not all—of the relevant entrepreneurial knowledge, were able to contribute to setting a high rate of technical change and a direction of technical change leading toward economic growth.

Two Senses of Complementarity

Many writers emphasize a different linkage across entrepreneurs, that is, the role of one entrepreneurial firm in spinning out others. Many, many firms—celebrated as "Fairchildren"—spun off from Fairchild. From those in turn, and from other early firms, a large number of other start-ups were spun off. The spin-off mechanism is an important source of entrepreneurial firms, of course, and the management literature is right to emphasize it (though it sensibly emphasizes spin-offs from established firms). As long as one is focused on the origins of firms, this perspective is important.

But if one is interested instead in the origins of markets and industries, as I am here, one needs a different notion of complementarity, more related to open systems and markets. I emphasize a separate linking mechanism between different firms in which one inventive or innovative firm sets up market relations to encourage the entrepreneurial creation of other firms and technologies. The linkage of a series of complementary inventions and innovations can find an overlap between technical opportunity and value creation visible to no single individual. That is the power of entrepreneurial creativity in founding whole
industries, and it centers not on the individual entrepreneur's effort to solve all problems but on leaving open opportunities for invention and innovation by others. Both critical IC firms, like Intel, and critical PC firms, like MITS, followed this path and thus launched an industry.

The Transformation Wrought by Entrepreneurial Software

Any computer, including the PC, is only as valuable as the software available for it. To underscore this point, consider the uses of early PCs. The creation of the Altair quickly led to the founding of entrepreneurial programmer-tools firms, of which some, such as Microsoft (then Micro-Soft), are still with us. Early software categories for the PC—also to a large degree the product of entrepreneurial creativity—primarily served hobbyist or other technical demand categories. Here we can see one of the great strengths of entrepreneurial creativity in a new industry with limited barriers to entry (because of open systems). A wide variety of software products came into existence to serve the existing market of hobbyists and the like.

The hobbyist market looked large to the entrepreneurs who flooded into the PC industry in the late 1970s. But it was vastly smaller than the PC's eventual market. Before long, the PC would be a near-universal tool in white-collar work and serve many other markets as well. Entrepreneurial creativity in software was critical to the transition. It was thus essential that the leading firms in the early PC industry also followed open-systems strategies. There were a number of leading firms in the early days, but by 1977 two clear leaders had emerged: Digital Research Inc. (DRI), which supplied the CP/M operating system running on a large number of computers, and Apple, which supplied both the Apple II and its operating system. Both companies encouraged a large number of software vendors to write for their computers, pushing information out to software vendors about system calls. The result was an explosion of software, including software from creators
whom the industry's founders did not know. The system, by which entrepreneurial creativity enabled further entrepreneurial creativity, was thriving. This system involved recombination both in an intellectual sense (the reuse of existing ideas in new domains) and in a market sense: the extension of the PC itself to wider and wider domains of use. Many of the firms took existing technologies and reworked them to be effective components of a small computer. For example, the firm then called Micro-Soft rewrote the existing Basic language to work on a very small computer. This called for new invention in the form of "tight code" and the addition of features that made the inventors of Basic very angry but which sold a great deal of software. Yet it also clearly recombined existing knowledge. Many other entrepreneurial PC firms of this era made similar recombinant inventions, involving engineering and entrepreneurial creativity. Some version of the relevant ideas existed for large computer systems; the entrepreneurs needed to create versions that would work in the PC market, which was a different environment technically (hence the tight code) and was also a radically different market (a PC needed to cost two orders of magnitude less than a big business data-processing machine).
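
The phrase "tight code" deserves a concrete illustration. The sketch below is not Microsoft's Basic; it is merely a generic example of the style, in which a programmer on a machine with a few kilobytes of memory packs several pieces of state into a single byte rather than spending a variable on each.

```python
# Eight boolean flags stored in one byte instead of eight variables;
# the flag names are invented for the example.
FLAGS = {"running": 0, "error": 1, "echo": 2, "trace": 3}   # bit positions

def set_flag(state: int, name: str) -> int:
    return state | (1 << FLAGS[name])

def has_flag(state: int, name: str) -> bool:
    return bool(state & (1 << FLAGS[name]))

state = 0
state = set_flag(state, "running")
state = set_flag(state, "echo")
print(has_flag(state, "running"), has_flag(state, "error"))  # True False
```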


Of the important innovations and inventions in software for the early PC, two stand out as transformative from an economic perspective. The invention of the spreadsheet and the word processor opened up new markets for themselves, of course, but also for the PC, which white-collar workers could now use. This was a very important step in locating the overlap between the technical opportunity represented by the PC and value creation. Tens of millions of PCs were eventually sold for white-collar workers' use. It may seem incredible to modern observers that a great deal of entrepreneurial creativity was required to see the PC as a machine someone would use at his or her office desk. But the fact is that the earliest participants in the computing industry did not see the importance of white-collar work in the demand for PCs. Nevertheless, they built their new industry in an open-systems way so that others could find the overlap. Just as bottom-of-the-learning-curve pricing was an invitation to recombinant technical change, so were open systems.

Let me add one last step here: the innovation of the IBM PC. The IBM PC was not much of an invention, in the narrow and technical sense of that term, as it was basically a CP/M machine, albeit a very good one. The IBM PC had some user-interface improvements over the average-practice CP/M machines of the time, such as function keys. But PC-DOS, the operating system that ran on the IBM PC, and the ancestor of modern Windows, was a clone of CP/M. As an innovation, however, the IBM PC was extremely important. IBM's marketing legitimized the PC as a machine that corporations could use. The market for PCs in white-collar automation exploded after its introduction.

Some Lessons

Perhaps the most important lesson of this discussion is that the scope of important innovation is not limited to technical advances. A second really important lesson has to do with recombination and the accumulation of knowledge. Institutions can be set up as parts of markets to encourage entrepreneurial creativity. As a result, almost like a miracle, a large number of uncoordinated entrepreneurs, working in markets, can invent something of great value for users whom they do not know.

The invention of the IC might appear to be, at first glance, an example of the linear model whereby science leads to engineering, which in turn leads to commercialization. Shockley was certainly a brilliant scientist whose scientific work formed the foundation of much that came later. However, other factors were—as they usually are—an essential feature of the success of the IC in creating large economic value. The discovery of the overlap between demand needs and technological opportunity was a joint effort, distributed over a large number of entrepreneurs and established firms. This discovery created a revenue flow that permitted IC firms to make the increasingly expensive investments needed for
further advances. Software entrepreneurship caused the fundamental advances in the IC as much as the reverse. Indeed, the founding of the PC industry was very far from following the linear model. It started in a market process whereby entrepreneurial creativity saw opportunities to use existing technical progress to serve demand opportunities; it grew through a market process whereby other entrepreneurial firms saw new demand opportunities and created powerful profit opportunities for the invention of new and better technologies.

Today we are living in another era of valuable and diverse entrepreneurial creativity in markets. The founding of new industries online, on mobile devices, and in social networks is the result of a market process of entrepreneurial creativity. At the moment, much of the innovation in these areas creates economic value through new forms of entertainment and play. What remains to be seen is the scope of entrepreneurial creativity in these new industries. Will new innovation repurpose these technologies away from play and toward the automation of white-collar work, as it did in the PC industry? If so, the economic value arising from this new round of entrepreneurial creativity could contribute a significant fraction of economic growth in this century. The key to such a step will be, as it was earlier, the variety of potential innovators and the open-market conditions conducive to widespread entrepreneurial creativity.

Notes

1. In emphasizing the entrepreneurial creation of new markets and industries, I follow F. A. Hayek, "Economics and Knowledge," Economica IV, new series, no. 13 (1937): 33–54, and the economic analysis of entrepreneurship. There is a related literature on the founding of firms, which is the alternative definition of entrepreneurship. I emphasize the economic over the managerial definition because of its focus on the long-term growth of the whole economy.

2. Of course, entrepreneurs do many other things in the economy. These include driving the small business sector in low-tech parts of the economy, creating alternatives to the corporate form through self-employment, etc.

3. This view of locating overlaps—rather than merely commercializing what has been invented—is an important distinction. The "linear model," in which
science creates something that engineering then makes concrete and companies then sell, has long been discredited empirically. See Stephen J. Kline and Nathan Rosenberg, "An Overview of Innovation," in The Positive Sum Strategy: Harnessing Technology for Economic Growth, ed. Ralph Landau, Nathan Rosenberg, and the National Academy of Engineering (Washington, DC: Nabu Press, 2012), 275–307, for a review of the main ideas, and 275 for a strong statement about the problems of the linear model.

4. A discussion of the relative importance of improvements in existing goods and of the creation of new goods can be found in Timothy F. Bresnahan and Robert J. Gordon, eds., The Economics of New Goods (Chicago: University of Chicago Press, 1996).

5. There is a multidimensional continuum of types of creative output, from fundamental and basic science at one extreme to art (high or kitsch) at another to the "creative" people at advertising agencies at yet another extreme.

6. Patented inventions are supposed to be a hybrid, in which a single company gets exclusive rights to use of the invention for a period of time while the invention enters the knowledge stock of the whole economy. Often, of course, there is either related nonpatentable knowledge that is not made public ("how-to" knowledge) or some other incompleteness, and part of the invention remains known only privately.

7. Joel Mokyr in The British Industrial Revolution: An Economic Perspective (Boulder, CO: Westview, 1999) has made the very important point that comparatively easy access to scientific and engineering knowledge in Britain helped spur the industrial revolution there. Potential entrepreneurs in Britain had access to scientific and engineering knowledge through institutions that did not call for extensive technical schooling, unlike the comparatively rigid French system, which rigorously trained a few specialists (excellently!). The French system sparked considerable invention but, Mokyr argues convincingly, less entrepreneurship than the British one.

8. Much higher rates of technical progress are possible for a short period of time and in economies that are catching up with world leaders.

9. Classic sources include Ernest Braun and Stuart Macdonald, Revolution in Miniature: The History and Impact of Semiconductor Electronics (Cambridge: Cambridge University Press, 1978); Paul Freiberger and Michael Swaine, Fire in the Valley: The Making of the Personal Computer, 2nd ed. (New York: McGraw-Hill, 2000); and Martin Campbell-Kelly, From Airline Reservations to Sonic the Hedgehog: A History of the Software Industry (Cambridge, MA: MIT Press, 2003), the latter book being particularly valuable on software innovation.

10. See, inter alia, AnnaLee Saxenian, Regional Advantage: Culture and Competition in Silicon Valley and Route 128 (Cambridge, MA: Harvard University Press, 1994), 31, for this history.


11. See Gordon Moore and Kevin Davis, "Learning the Silicon Valley Way," in Building High-Tech Clusters: Silicon Valley and Beyond, ed. Timothy F. Bresnahan and Alfonso Gambardella (Cambridge: Cambridge University Press, 2004), 7–39.

12. See ibid. I agree with Moore's assessment that this invention was "second only to the invention" of the IC itself.

13. See A. Michael Spence, "The Learning Curve and Competition," Bell Journal of Economics 12, no. 1 (1981): 49–70, for the pricing implications and a competitive analysis.

14. J. Alois Schumpeter, Business Cycles: A Theoretical, Historical, and Statistical Analysis of the Capitalist Process (New York: McGraw-Hill, 1939), 88.

15. The classic management model is D. Levinthal, "Adaptation on Rugged Landscapes," Management Science 43, no. 7 (1997): 934–50. A wide literature is cited in Timothy F. Bresnahan, "Recombination, Generality, and Re-Use," in The Rate and Direction of Inventive Activity Revisited, ed. Josh Lerner and Scott Stern (Chicago: University of Chicago Press for the National Bureau of Economic Research, 2012), 611–56.

16. More recently, the creation of computer-aided design tools that can interact with computer-aided manufacturing tools permits an IC designer to hire a manufacturer to build a particular design. At the time of the industry's founding, however, the market in manufacturing services did not yet exist, as the technical progress and commercial innovation that would eventually enable it were themselves enabled by the invention and commercialization of the IC.

17. Freiberger and Swaine cover this well in Fire in the Valley.

18. See ibid.

19. Interview in Personal Computing (November–December 1977): 59.

Bibliography

Braun, Ernest, and Stuart Macdonald. Revolution in Miniature: The History and Impact of Semiconductor Electronics. Cambridge: Cambridge University Press, 1978.
Bresnahan, Timothy F. "Recombination, Generality, and Re-Use." In The Rate and Direction of Inventive Activity Revisited, edited by Josh Lerner and Scott Stern, 611–56. Chicago: University of Chicago Press for the National Bureau of Economic Research, 2012.
Bresnahan, Timothy F., and Alfonso Gambardella, eds. Building High-Tech Clusters: Silicon Valley and Beyond. Cambridge: Cambridge University Press, 2004.
Bresnahan, Timothy F., and Robert J. Gordon, eds. The Economics of New Goods. Chicago: University of Chicago Press, 1996.
Campbell-Kelly, Martin. From Airline Reservations to Sonic the Hedgehog: A History of the Software Industry. Cambridge, MA: MIT Press, 2003.


Freiberger, Paul, and Michael Swaine. Fire in the Valley: The Making of the Personal Computer. 2nd ed. New York: McGraw-Hill, 2000.
Griliches, Zvi. R&D and Productivity: The Econometric Evidence. Chicago: University of Chicago Press, 1998.
Hayek, F. A. "Economics and Knowledge." Economica IV, new series, no. 13 (1937): 33–54.
Kline, Stephen J., and Nathan Rosenberg. "An Overview of Innovation." In The Positive Sum Strategy: Harnessing Technology for Economic Growth, edited by Ralph Landau, Nathan Rosenberg, and the National Academy of Engineering, 275–307. Washington, DC: Nabu Press, 2012.
Levinthal, D. "Adaptation on Rugged Landscapes." Management Science 43, no. 7 (1997): 934–50.
Mokyr, Joel. The British Industrial Revolution: An Economic Perspective. Boulder, CO: Westview, 1999.
Moore, Gordon, and Kevin Davis. "Learning the Silicon Valley Way." In Building High-Tech Clusters: Silicon Valley and Beyond, edited by Timothy F. Bresnahan and Alfonso Gambardella, 7–39. Cambridge: Cambridge University Press, 2004.
Nelson, Richard R., and Sidney G. Winter. An Evolutionary Theory of Economic Change. Cambridge, MA: Harvard University Press, 1982.
Saxenian, AnnaLee. Regional Advantage: Culture and Competition in Silicon Valley and Route 128. Cambridge, MA: Harvard University Press, 1994.
Schumpeter, J. Alois. Business Cycles: A Theoretical, Historical, and Statistical Analysis of the Capitalist Process. New York: McGraw-Hill, 1939.
Spence, A. Michael. "The Learning Curve and Competition." Bell Journal of Economics 12, no. 1 (1981): 49–70.


Chapter 9

Scientific Breakthroughs and Breakthrough Products

Creative Activity as Technology Turns into Applications

Tony Hey and Jonathan Hey

Most of us would say that we can recognize creativity when we see it, even though creative output and activity vary enormously across all domains. We are able to recognize creativity in a scientific breakthrough, an incredible new product, or a solution to an everyday problem. But what is it that we are identifying, and what did it take for the creator to get there? What are the different creative mechanisms involved in unraveling nature versus launching the Toyota Prius? In this essay we take a closer look at creative breakthroughs in progressively more applied domains, from pure science, to the commercializing of technology research, to the creation of successful new technology products. In the process we learn more about the creative activity involved. We begin with the case of pure science.

Pure Science

More scientists are alive now than have lived during all previous human history, so estimates say. Yet the popular history of science is a story of a relatively small number of exceptional individuals. Clearly the storybook version of the progress of science is an oversimplification; many
breakthroughs were built on the painstaking incremental research of numerous unfamiliar researchers. But many examples demonstrate that dramatic progress in research was the result of an exceptionally determined and focused individual who had the vision, persistence, and technical skills to make a difference.

Albert Einstein's paper on special relativity, published in 1905, is a well-known example. At the time, Einstein was not even in academia but was toiling in isolation from the professional physics community in the patent office in Bern. In fact, an unsung hero from the academic establishment had a hand in Einstein's emergence. Max Planck, who himself had sowed the seeds of the quantum mechanical revolution in 1900, was the perceptive reviewer who recommended that Einstein's revolutionary paper be accepted for publication in a respectable physics journal, despite its unconventional origin and very sparse number of references. With his subsequent celebrity in the scientific community and eventually among the general public, Einstein could easily have become distracted from physics. That he managed to maintain his focus and achieve an even more impressive accomplishment with his theory of general relativity in 1916—which gave us an entirely original perspective on gravity, matter, and the nature of space-time—is truly impressive. Remarkably, both the special and the general theories of relativity are required for the GPS satellite technology on which we now rely.

Einstein's discovery of relativity is a poster child for exceptional creativity. A more typical example would be a less well-known but equally remarkable breakthrough made by two Dutchmen, Martinus Veltman and Gerard 't Hooft, in the 1970s. By this time it had long been known how to "tame the infinities" appearing in calculations of the relativistic quantum field theory that describes the electromagnetic forces. Physicists Richard Feynman, Julian Schwinger, and Sin-Itiro Tomonaga were awarded the 1965 Nobel Prize in physics "for their fundamental work on quantum electrodynamics." A long-standing problem in particle physics was how to understand the other
fundamental forces in nature. The technical achievement of Veltman and 't Hooft was to show how one could also make sense of generalizations of quantum electrodynamics that could describe both the weak and the strong forces of nature. Using a novel technique for regularizing the infinities that appeared in the calculations, they were able to demonstrate the existence of a consistent "renormalization scheme" for the non-Abelian gauge field theories. This discovery was hugely significant since it rapidly led to establishment of a unified theory of the weak and electromagnetic interactions and the first credible candidate theory for the strong nuclear interactions.1

At the time of their breakthrough, the physics community, as defined by leaders of the field such as the Nobel Prize winners T. D. Lee and Abdus Salam, was exploring exotic alternatives to non-Abelian gauge field theory, such as introducing a "mild" violation of causality or using field theories with nonpolynomial Lagrangians, as a possible solution to the weak interaction puzzle. Only Veltman's exceptionally determined belief in the importance of Yang-Mills gauge theories led him and his PhD student 't Hooft to continue striving to understand what fellow theoretical physicist Sidney Coleman once called a "forgotten corner" of theoretical physics.2

The breakthroughs of Einstein and of Veltman and 't Hooft required individual dedication, technical ability, and focus far beyond the ordinary. However, the same qualities are required, if perhaps not to the same degree, by thousands of researchers, working individually or in small collaborations, who make the many necessary but comparatively less creative contributions to what one might uncharitably call "filling out the details," in preparation for the next breakthrough in our understanding of nature. It is also worth noting that Einstein, for all his theoretical brilliance, had some industrial patents to his name, including one for a leak-proof refrigerator pump in collaboration with Leo Szilard. His contributions to technical innovation were undoubtedly creative in some sense, although many other individuals could have made similar innovations.


Very Large Collaborations: The Emergence of Crowdsourcing

What of larger collaborations in scientific research? While large-scale collaborations such as those at CERN's Large Hadron Collider, or the teams tackling the mapping of the human genome, can be exceptionally numerous, the norm in science is still individual researchers or small collaborations. However, in recent years there has been much discussion of a looser and larger type of collaboration—usually called "crowdsourcing"—using what is sometimes termed the "wisdom of the masses." Crowdsourcing operates by providing a Web platform that gathers and combines contributions from huge numbers of people. The collective effort and intelligence, it is argued, will ultimately result in a more complete and successful output than that of a few dedicated and highly talented individuals. In this sense, then, we have an approach to creativity that is the opposite of the creativity exhibited by Einstein working alone in his patent office.

A number of recent books explore the virtues of crowdsourcing, often based on the success of Wikipedia and the open-source software movement.3 At first glance, Wikipedia seems a compelling example, but on a closer look, it becomes less clear that its success can be replicated or ultimately sustained. Although anyone has the right to edit a Wikipedia entry, Wikipedia's founder, Jimmy Wales, discovered in 2005 that the active community of primary contributors was made up of just several hundred users. Over 50 percent of edits were performed by just 0.7 percent of extremely dedicated people, while the vast majority of users made only minor fixes. Indeed, Wales found that 2 percent of Wikipedia users account for more than 70 percent of edits.4 Similarly, the promotion of open-source software as an altruistic alternative to commercial software ignores the fact that many of the most successful open-source projects, such as Linux and Mozilla, are heavily supported by industry as a business strategy. By contrast, the vast majority of academic open-source projects on SourceForge, a site for
open-source software development and sharing, have failed to attract a self-sustaining community.

In parallel to these collaborative development efforts, a number of "open-innovation" platforms5 have been launched to explicitly harness the creativity of the masses. These platforms directly tap and coordinate the labor and ideas of others. They range from Amazon's Mechanical Turk service (farming out small tasks) and Innocentive (matching problem solvers and problem seekers) to OpenIDEO and an earlier offering, ThinkCycle, which aim to tackle larger, complex issues. For instance, a problem launched by the chef Jamie Oliver on the OpenIDEO site posed the challenge of how to raise kids' awareness of the benefits of fresh food so they can make better choices. The long-term success of such initiatives remains to be seen, though their popularity seems to be growing.

As a counterpoint to the crowdsourcing approach described above, Jaron Lanier, in his book You Are Not a Gadget, argues that the Web 2.0 "hive mentality"—which enthusiastically encourages crowd creativity and the mashing up of all available content for free—poses a serious threat to the ability of journalists, musicians, and professional writers to earn a living. At the same time, it has encouraged the devaluation of the words "creativity," "wisdom," and "friend" on social networking sites. For Lanier, the hive mentality diminishes respect for individual creativity and authorship and promotes the division of information into ever more reusable, bite-sized chunks. It lacks a holistic, humanistic perspective that had previously been provided by passionate individuals devoting their lives to a specific area of knowledge.

So while crowdsourcing has its place in tapping creative input from the many, this kind of very large-scale collaboration has dangers as well as sweet spots. Crowdsourcing is unlikely to enable the kind of breakthroughs that Einstein and Veltman/'t Hooft made. As a journalist asked, "A problem shared is a problem halved, goes the old saying. But what happens if you share a problem with a million people? Are you left with a millionth of a problem? Or just lots of rubbish suggestions?"6
The crowdsourcing model fails to capture the kind of deep thinking and dedication that is required for scientific breakthroughs.
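
The concentration Wales observed is straightforward to measure on any edit log. The sketch below runs on a synthetic log whose skew loosely mimics the figures quoted above; a real Wikipedia dump would be substituted for the invented data.

```python
from collections import Counter

# Synthetic edit log: a small core of heavy editors plus a long tail of
# one-off contributors. All numbers are invented for illustration.
edits = [f"core{i % 20}" for i in range(8_000)]      # 20 heavy editors
edits += [f"casual{i}" for i in range(2_000)]        # 2,000 one-off editors

counts = Counter(edits)                              # edits per editor
ranked = sorted(counts.values(), reverse=True)

def share_of_top(fraction):
    """Share of all edits made by the most active `fraction` of editors."""
    top = max(1, int(len(ranked) * fraction))
    return sum(ranked[:top]) / len(edits)

print(f"top 1% of editors -> {share_of_top(0.01):.0%} of edits")   # ~80%
print(f"top 2% of editors -> {share_of_top(0.02):.0%} of edits")   # ~80%
```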

Commercializing Technology Research

The ideas of the hive mentality and crowdsourcing are a direct outgrowth of the Internet and the World Wide Web. The original idea of hypertext links was introduced by Ted Nelson in his pioneering Xanadu project. Hypertext was taken up enthusiastically by the computer science community, but it took an ex-physicist, Tim Berners-Lee, to come up with both a vision and a simple and practical way to implement hypertext links in the World Wide Web. Unlike the hypertext research community, Berners-Lee recognized the value of an engineering compromise in which links were allowed to fail, thereby permitting the Web to scale to the size it is today.7 In this case, a pragmatic approach and an emphasis on transforming the research vision step by step into a practical system provided the breakthrough.
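
The compromise is easy to state in code. Xanadu-style hypertext called for links that could never dangle, which requires global coordination among all documents; a Web link is a one-way address that may simply fail at fetch time. A minimal sketch of the Web's side of that bargain, with a hypothetical URL, is below: a dead link is a routine local outcome, so no global bookkeeping is needed and the system can grow without limit.

```python
from urllib.request import urlopen
from urllib.error import URLError

def fetch(url, timeout=5):
    """Follow a one-way Web link; a dangling link is a normal outcome,
    not a violation of system-wide integrity."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.read()
    except (URLError, OSError):
        return None   # e.g., a 404: render the rest of the page anyway

page = fetch("http://example.org/maybe-moved.html")   # hypothetical URL
if page is None:
    print("link is dead; the surrounding document still works")
```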


We believe that bringing the Web to the world or, more generally, taking computer science research out of the lab, required a type of creative mentality different from that needed in pure science research. In our experience, only a small minority of academics are both very creative in their research and possess the entrepreneurial qualities and drive to launch a viable business from a research lab. Clearly not all the outputs of basic research are best suited to starting a company; nevertheless, commercializing research and creating new companies are two of the primary goals of governments around the world. If successful, such commercialization can provide income and positive publicity for a research institution. For industrial research labs such as Microsoft Research, the goal is not to spin out new companies but to generate new features and new products and provide Microsoft with the necessary agility to meet new competitive challenges. As with the case of Veltman and 't Hooft, entrepreneurship in bringing technology out of the research lab requires dedication and passion, but unlike pure research, it also requires flexibility and the willingness to compromise and adapt to the market's messy realities.

A primary reason for this messiness is the presence of a customer for whom a new technology needs to provide sufficient benefit. Without customers, a technology cannot be commercialized. Consider the challenges one research team faced in developing the ability to make very small wireless sensor technology.8 The engineering vision was compelling: wireless sensors should be made small enough and cheap enough to be spread around like sand. The sensor software would self-configure a highly robust network, fusing data signals to provide feedback on light, temperature, gas levels, and a host of other possible measurements. The network would be robust because if one sensor were to fail, another ten, a hundred, or even a thousand would be able to take its place. This was immediately appealing in a military context and also had enormous potential for commercial applications.

After demand for early versions of the wireless boards from other researchers increased, the research team decided to launch a business. But the path to profit was not a simple one. Though the vision captivated the media and prospective buyers, the team soon discovered that small was not the most pressing need for those in the chosen market of industrial monitoring. Potential clients cared mostly about reliability, compatibility with industry standards, ease of use and installation, and low power consumption. While the new company was founded on the exciting technology vision of tiny sensors, the team soon found itself forced to adapt. To sell to the chosen market, benefits other than small size had to be offered. The change in direction was difficult for the team, requiring a balance of flexibility, persistence, and many years of work.
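
The robustness claim rests on redundancy plus data fusion: with enough overlapping sensors, no single reading matters. A minimal sketch of that idea, with invented readings:

```python
from statistics import median

def fuse(readings):
    """Fuse redundant sensor reports; None marks a failed node."""
    alive = [r for r in readings if r is not None]
    if not alive:
        raise RuntimeError("all sensors down")
    return median(alive)

# Seven cheap temperature sensors: two dead, one wildly miscalibrated.
temps = [21.4, 21.6, None, 21.5, 35.0, None, 21.3]
print(fuse(temps))   # 21.5 -- the failures and the outlier barely matter
```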


Microsoft's Kinect sensor technology for the Xbox provides another instructive example. The story is a complex blend of research, development, and business strategy in which multiple contributions came together to enable the launch of a breakthrough product. The first element was a big idea from Alex Kipman in the Xbox product group. The idea was not that real-time body tracking could be used to control a machine; that had been around for some time, not least because of the film adaptation of Philip K. Dick's Minority Report. The real big idea was to bet that depth camera technology would tip us into a world where real-time body tracking would become sufficiently accurate to build a real system. In other words, Kipman provided the belief and the vision that here was a problem that was worth working on and solvable.

A second creative contribution came from the Xbox developers who assembled the first prototype tracker, a system called Bones. The Bones system provided an amazing demo, but would never run for longer than a few minutes, because success at recognizing the following frame depended on success at recognizing the current frame. Researchers at Microsoft Research already understood this failure mode. In essence, the problem was reframed by the researchers as the building of a "detection" rather than a "tracking" system.

A third key contribution hinged on both planned and serendipitous research. Computer vision researchers had always wanted to solve general object recognition, something that even a young child can do easily but computers have so far found very difficult. A key piece of the puzzle came from Jamie Shotton, a researcher at Microsoft Research's Cambridge Laboratory, who was working on fast object recognition and was given the task of solving fast human body pose estimation. Shotton found a way to make the two into the same problem. The serendipity stemmed from researchers working on topics for some time, largely out of curiosity, with results that turned out to be highly relevant to solving a real-world problem. Other innovations in Kinect, such as the directional speech and identity recognition systems, came from different labs in Microsoft Research. Each contribution required long-term investment in basic computer vision, machine learning, and speech recognition research. These research fields have advanced over the past decades with significant contributions from several major research groups around the world.
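
The distinction that doomed Bones and motivated the reframing is easy to caricature. In a tracker, solving frame t requires a good solve at frame t-1, so one failure propagates forever; a detector solves every frame from scratch, so a failure costs exactly one frame. The toy below is schematic, with numbers standing in for poses, and is not the Kinect pipeline:

```python
def run_tracker(frames, fail_at=30):
    """Tracking: each frame is solved relative to the previous solution,
    so one hard failure leaves nothing to condition on ever after."""
    est, out = frames[0], []
    for i, obs in enumerate(frames):
        if est is None:                 # lost on an earlier frame: stay lost
            out.append(None)
            continue
        est = None if i == fail_at else obs
        out.append(est)
    return out

def run_detector(frames, fail_at=30):
    """Detection: each frame is solved independently; a failure lasts one frame."""
    return [None if i == fail_at else obs for i, obs in enumerate(frames)]

frames = list(range(100))               # stand-in observations
print(sum(e is None for e in run_tracker(frames)))    # 70 frames lost
print(sum(e is None for e in run_detector(frames)))   # 1 frame lost
```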


What was impressive in the case of the Kinect was how these different research groups in Microsoft Research, on different continents, were able to come together and work with the Xbox product group to realize the vision of "your body is the controller." As we shall discuss later, this vision was well suited to the current market for gaming products.

The above examples—the World Wide Web, wireless sensor technology, and the Kinect sensor—suggest that the path toward bringing technology to market is not a straightforward one. Berners-Lee, working largely by himself at CERN, provided the engineering compromise that took hypertext out of the physics research community. The wireless sensor team began with a technology and a research vision and gradually adapted their offering to the market's needs. The developers of the Kinect system required a number of individual contributions as well as group collaborations, a belief in the vision of a new type of controller, and a number of surprising engineering solutions to tough problems. Thus, the shift from developing technology through basic research to commercializing an application with customers requires the willingness to adapt research technology to the realities of implementation and the needs of the marketplace.

Consumer Technology

Much of the delay in commercializing new technologies stems from overcoming challenges in implementation: making technologies reliable enough, safe enough, and affordable enough—as with robotics and speech recognition, to name just two examples. Passing regulatory approval is also a major hurdle, for example, in the pharmaceutical industry. Yet this remains only part of the challenge.

Creativity in the development and use of technology has received substantial attention from researchers aiming to systematize the seemingly haphazard nature of invention and creative problem-solving. Advocates of creative problem-solving aim to increase the likelihood of success by seeing problems from a different angle or encouraging the brain to make lateral leaps and seek out random connections, as
in brainstorming. At the opposite end of the spectrum, with more specialized and structured processes, are methods such as the Theory of Inventive Problem-­Solving (TRIZ).9 This method proves most valuable in situations known as “under-­the-­hood design.” The latter phrase is borrowed from car manufacture, in which, although it is vital to have functioning and hard-­wearing engine seals under the hood of a car, few people buy cars because of their engine seals. But the success of TRIZ also highlights the area where most creative problem-­solving methods fall short: not in their ability to provide new angles to attack technical problems, but in their failure to accommodate the needs, desires, and idiosyncrasies of customers.  Most customers do not care if something is new and different; they care if it meets their needs. Sometimes the very fact that offerings are new and different holds back their adoption by the majority. Conrad Wai and Peter Mortensen have written that the most successful new products should in fact be boring.10 In other words, new products should seamlessly fit into existing ways of living, rather than require people to change themselves for a new technology. For example, the Nike+ running system—­a product that allows runners to capture data about their running while on the move—­blended easily into what people were already doing, which was running while listening to music. The technology that makes it work remains behind the scenes, and the power of the product reveals itself gradually only after people have become accustomed to including it in their routine. The ability, then, to identify and adapt technology to meet people’s needs is a key driver in successful new product innovation. Studies of new product development show that a focus on framing a new product around unmet user needs improves the chances of its long-­lasting success.11 The primary means to identify and design effectively for customer’s needs is to develop empathy for the customer. One way to do this is to be them. For example, when assembling the team to compete in the console gaming market, Microsoft brought together an in-­house team that included gamers. As a result, it was easy for the designers of the 200


original Xbox to relate to the needs of those who would use the product because, as one team member put it, "Hell, we were those guys."12 Without empathy for the customers, a product is more likely to fail—as happens with the vast majority of new technology products. Consider the case of the battery-powered vehicle known as the Segway Personal Transporter. Before its official launch in 2001 the Segway was one of the most hyped inventions of all time. Its inventor, Dean Kamen, stated in Time magazine, "The Segway will be to the car what the car was to the horse and buggy."13 But it didn't pan out that way. The Segway has remained a novelty, not a successful product innovation; it has not yet achieved broad market acceptance, except in a few niche applications, many of which depend on its short-lived novelty appeal. While it is possible that the Segway may yet achieve its promise, how did the initial forecasts for it prove so misguided? The disappointing take-up of the Segway is partly due to the focus of its designers on the technology rather than on the needs of customers. The initial tests of the Segway were carried out by members of the company and their close collaborators, rather than by target groups of potential customers. Indeed, much of the initial hype was due to secrecy about exactly what the project, dubbed Ginger, would turn out to be. In practice, the Segway provided mixed benefits, at best, to many of its intended buyers, such as the postal service and the police. The first mail carrier to deliver a piece of mail with a Segway had this reported experience: "[Chris] Pesa enjoyed trying out the device, but it didn't save him any time: he couldn't sort the mail between homes as he could when walking his route. And if it rained, it was impossible to carry an umbrella, because you needed both hands to steer." The police force in the town of its development, Manchester (New Hampshire), "found the machines useful for downtown parking control," but that was about it. The mountain bikes that the department employed to patrol certain beats were cheaper than Segways, and did not have batteries that ran out of juice.14 The self-balancing technology of the Segway is undoubtedly


impressive, but it was developed without a complete picture of the needs to be addressed. These were more than just the need to get from place to place more easily. In essence, the development team focused on the how of engineering implementation rather than first finding out about the what and the why of people's needs.15 Yet even when new technologies are direct replacements of existing technologies, the path to replacing an incumbent technology can take time. Digital photography, for example, did not initially provide sufficient quality for professional photographers, who required high-quality, large-scale prints. It was first taken up by more casual photographers, who appreciated its relative cheapness and flexibility. Only many years later, after significant improvements in the quality of digital processing, and increased capacity of storage cards, was digital photography able to compete with, and ultimately usurp, the place of film for professional photographers. Successful product development strategy makes use of the gradual shifting of market needs as technologies evolve and new applications are found by employing different strategies at different stages along the adoption curve.16 The time between a technology being developed in research and employed in commercial projects can be substantial. The myriad current applications of lasers in commercial use—as laser pointers, measuring devices, mouse sensors, light shows, scanners, and so on—were certainly not in the researchers' minds as the first lasers were made to work in the research lab in the early 1960s. Much of the development work required was about finding a creative match between old problems and this new "solution looking for a problem" (as the laser fast became known). Such a perspective on invention has also been identified in studies where the creative work of designers is not associated with identifying and solving a specific problem, but emerges from a continuous co-evolution of problem-solution matches.17 A more recent example of successful technology application in the marketplace is Nintendo's launch of the Wii gaming system. While just as radical as the Microsoft Kinect sensor at launch, the Wii followed


a different path through development. Rudimentary accelerometers have existed since the 1920s,18 and hand-gesture controllers have been extensively used in lab environments and technology research. Nintendo adapted the existing technology so that accelerometers could monitor the movements of gaming controllers reliably and robustly. The result was a new style of gaming. Yet just as important as the application of technology was Nintendo's market focus on nongamers. While existing consoles such as the Xbox focused on improving hardware for ever more immersive games and sophisticated gamers, Nintendo aimed to sell to those who were traditionally nongamers. This group has been resistant to new console launches: the standard complaint from nongamers, when asked why they did not play, was that game controllers had too many buttons. The first major Nintendo controller, originally released in 1983, had four buttons and a four-way directional control pad. A recent controller for the Xbox 360 console has two analog control sticks, two analog triggers, eleven digital buttons, and a digital directional control pad. The newer controller allows extraordinary control, but the complexity of the input device is too daunting for many. Yet nongamers regularly deal with TV and video remote controls with high numbers of buttons. So does the button complaint really make sense? It turns out that the primary challenge that nongamers experience is simultaneous, ambidextrous control—the type of control that enables a pianist to play with both hands at the same time. A substantial part of the initial appeal of the Nintendo Wii, then, is that it allows a large amount of control to be assumed by the movement of the rest of the body, circumventing much of the challenge of mastering simultaneous ambidextrous control. Using accelerometers as an input mechanism limited the number of buttons needed on the controller while still enabling sophisticated input. To build on the new possibilities, marketing for the Wii was developed with the intention of bringing friends and family together, thus appealing to kids and grandparents alike: advertisements for the Wii explicitly included several generations to emphasize the family nature of the games.
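As a concrete illustration of the input mechanism just described, the following sketch in Python shows how three-axis accelerometer samples might be reduced to a crude "swing" gesture. It is a generic threshold technique only, not Nintendo's actual algorithm; the sample format and the 2.5 g threshold are assumptions chosen for the example.

```python
import math

# Generic illustration of accelerometer-based gesture input; this is
# not Nintendo's algorithm. Samples are assumed to be (ax, ay, az)
# tuples in m/s^2, and the swing threshold is an assumed value.

GRAVITY = 9.81
SWING_THRESHOLD = 2.5 * GRAVITY  # assumed: a swing peaks well above 1 g

def is_swing(samples):
    """Flag a 'swing' when the peak acceleration magnitude clearly
    exceeds what gravity alone produces on a controller at rest."""
    peak = max(math.sqrt(ax * ax + ay * ay + az * az)
               for ax, ay, az in samples)
    return peak > SWING_THRESHOLD

# A resting controller reads roughly (0, 0, 9.81); a tennis-style swing
# produces a much larger transient.
print(is_swing([(0.0, 0.0, 9.8), (1.0, 0.5, 10.2)]))   # False
print(is_swing([(0.0, 0.0, 9.8), (20.0, 5.0, 25.0)]))  # True
```

The point of the design, as the chapter notes, is that a single gross body movement replaces a cluster of buttons: the hard problem was making such detection robust in consumers' living rooms, not the thresholding itself.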


In each of the previous examples, the products were founded on impressive and sound engineering. But with the laser, the market had yet to be identified, while with the Wii, much of the creativity involved in its success was the holistic integration of robust technology, an easy-to-use interface, friendly and family-oriented games, and a focus on the nongaming market. With the Segway, it could be argued that the market need that the technology truly fulfills has yet to be identified.

Design Is a Social Process

Unlike exceptional creativity in science, bringing a technology to market can seldom be credited to one individual. It generally requires a creative framing of the situation, a matching of the technology to people's needs, and the creativity of teams. While the media exalt the singular design visions of Steve Jobs and Jonathan Ive at Apple, Apple's most successful products are the result not of individuals but of teams working together. Multidisciplinary development is more of a constraint to work with than a luxury to choose. Design, far from being a single insight into the application and development of a technology, is a social process involving multiple actors, each of whom brings different, complementary, and often conflicting viewpoints to the situation.19 While a team's path can seem logical with hindsight, gaining agreement within a team on the key needs to be met and the market opportunity and strategy to be followed is always difficult and the source of many potential pitfalls.20 Ideas evolve, are altered by research, and are influenced by every team member. They are also a product of iteration through user testing and of interpreting and acting on customer feedback. As with the Xbox Kinect sensor, pointing to one idea as the key one that makes a new offering work is difficult. Social factors operating beyond the creative team also influence the outcome of successful technology development. Consider one of technology's visionaries. Thomas Edison is often cited as the epitome of exceptional creativity in technology development and commercialization.


It is tempting, as with Archimedes and his eureka moment, to imagine Edison in his lab suddenly visualizing invention after successful invention with sweat on his brow and customers flocking to the door. Edison was no doubt an exceptionally creative person, but several detailed analyses have highlighted key factors in his successes and failures that go beyond any traditional view of his personal inventive and creative problem-solving talents. To begin with, separating the output of Edison as an individual from that of the Edison organization is difficult. A long-term assistant of his remarked that "Edison is in reality a collective noun and refers to the work of many men."21 At one point in its history over two hundred people were actively developing innovations for Edison's organization. Andrew Hargadon, a researcher in innovation and entrepreneurship, argues that Edison was also an expert at leveraging an extensive network to manage cross-domain innovation; he repeatedly transferred inventions from one domain into another. Hargadon views this ability as "technology brokering," in which successful innovators manage networks that span multiple industries. This reach allows them to identify opportunities to apply existing technologies in one industry that could become breakthrough innovations in another.22 In addition to the importance of leveraging a network across industries, technology historian W. Bernard Carlson argues that "inventors invent both artifacts and frames of meaning that guide how they manufacture and market their creations."23 As such, inventors like Edison combine technical and social solutions as a new product is realized and launched. The success of a product relies on the matching of these two. In Edison's case, Carlson argues that he belonged to the production culture and values of the late nineteenth century, valuing hard work and the production of goods for useful purposes. Edison also had consistently more success in selling his inventions to businesses than to the consumer market. When Edison began full-scale manufacture of the phonograph, he continued with his tried-and-tested business focus, seeing the phonograph as a business dictating


machine. Although competitors later began to sell the phonograph with music and entertainment as its primary purpose, Edison only reluctantly agreed to let his invention be sold for amusement purposes, nearly twenty years after its initial launch. Carlson believes this reluctance hinges on Edison's producer values, which attached little worth to inventions for seemingly trivial purposes. Invention of a technology, even with an application in mind, is not always enough for commercial success. Technology exists within a social and cultural context, and requires adept leveraging of a network of resources and people for its realization, plus a creative match between a technology and its use.

Conclusions

We have examined the differences in creative activity across the related domains of science, technology, and new product development. As illustrated by the Kinect sensor, each of these domains needs to be closely linked in the process of launching a successful new product. Yet the nature of the creative activity that results in scientific breakthroughs is quite distinct from that resulting in breakthrough products. Because many breakthroughs in pure scientific research require dedication, deep thinking, technical skills, and great perseverance, we do not see an approach such as crowdsourcing as a strong candidate for exceptional creativity in science. In most cases, exceptional creativity will continue to rely on the creativity of passionate individual scientists. In contrast, commercializing research requires an understanding of market needs, a willingness to be flexible, and the ability to tame technology to the messy realities of use. The cases of wireless sensor technology and the Kinect sensor highlighted the complex path often followed in bringing technologies from research to market in the form of new products. Rather than the individual contributions made by scientists like Veltman, 't Hooft, or Einstein, the Kinect required many contributions from researchers at multiple research labs tackling


many different aspects of the problem. Moreover, consumer technology products require the creativity of teams made up of multiple disciplines. Creative insights often concern a novel framing of the situation that provides a match between an unmet customer need and a technology that can help satisfy it. Finally, a development team also needs to take into consideration the social context into which a new product is launched. The form that creativity takes and how it is expressed in the making of breakthrough products is therefore quite distinct from creativity in breakthrough science.

Notes

1. That is, the Glashow-Salam-Weinberg theory for the electro-weak interactions, and the theory of quantum chromodynamics for the strong interactions.
2. Personal communication from Martinus Veltman to Tony Hey, 1976.
3. See, for example, Don Tapscott and Anthony D. Williams, Wikinomics: How Mass Collaboration Changes Everything (London: Atlantic Books, 2008); Jonathan Zittrain, The Future of the Internet and How to Stop It (New Haven, CT: Yale University Press, 2008); Clay Shirky, Here Comes Everybody: The Power of Organizing without Organizations (New York: Penguin, 2008); and Charles Leadbetter, We-Think: The Power of Mass Creativity (London: Profile, 2008).
4. This information is from a talk given by Jimmy Wales at the School of Information Management, University of California, Berkeley, on November 3, 2005. The picture is, however, less clear when one considers who provided the bulk of Wikipedia's content, with nonregular users often providing large chunks of information that are then "tidied" by Wikipedia's regulars.
5. Henry Chesbrough, Open Innovation: The New Imperative for Creating and Profiting from Technology (Boston: Harvard Business School Press, 2003).
6. Tom de Castella, "Should We Trust the Wisdom of Crowds?" BBC Magazine, July 2010, http://news.bbc.co.uk.
7. Tim Berners-Lee, with Mark Fischetti, Weaving the Web: The Past, Present and Future of the World Wide Web by Its Inventor (London: Orion, 1999).
8. Jonathan Hey, "Effective Framing in Design" (PhD dissertation, University of California, Berkeley, 2008).
9. Genrich Altshuller, The Innovation Algorithm, trans. Lev Shulyak and Steven Rodman (Worcester, MA: Technical Innovation Center, 1999).
10. Conrad Wai and Peter Mortensen, "Persuasive Technologies Should Be Boring," Lecture Notes in Computer Science 4744 (2007): 96–99, http://dx.doi.org/10.1007/978-3-540-77006-0_12.


11. Billie Jo Zirger and Modesto A. Maidique, "A Model of New Product Development: An Empirical Test," Management Science 36, no. 7 (1990): 867–83; Roy Rothwell, Factors for Success in Industrial Innovations: Project SAPPHO: A Comparative Study of Success and Failure in Industrial Innovation (Brighton: University of Sussex, 1972); and Sara L. Beckman and Michael Barry, "Innovation as a Learning Process: Embedding Design Thinking," California Management Review 50, no. 1 (2007): 25–56.
12. Dev Patnaik, with Peter Mortensen, Wired to Care: How Companies Prosper When They Create Widespread Empathy (London: Financial Times/Prentice-Hall, 2009).
13. John Heilemann, "Reinventing the Wheel," Time, December 2, 2001, http://www.time.com.
14. Gary Rivlin, "Segway's Breakdown," Wired, March 2003, http://www.wired.com.
15. The Segway faced other challenges, too, which reduced sales, including a flawed sales model (launching on Amazon.com where people could not try the Segway out) and difficulties with local regulations, for example, getting permission for a Segway to be used on sidewalks. For a thorough discussion, see Steve Kemper, Reinventing the Wheel: A Story of Genius, Innovation, and Grand Ambition (New York: HarperCollins, 2005).
16. Alonzo Canada, Peter Mortensen, and Dev Patnaik, "Design Strategies for Technology Adoption," Design Management Review 18, no. 4 (2007): 32–41.
17. Kees Dorst and Nigel Cross, "Creativity in the Design Process: Co-Evolution of Problem-Solution," Design Studies 22, no. 5 (2001): 425–37.
18. Patrick L. Walter, "The History of the Accelerometer 1920s–1996: Prologue and Epilogue," Sound and Vibration 41 (2007): 84–92.
19. Louis L. Bucciarelli, "An Ethnographic Perspective on Engineering Design," Design Studies 9, no. 3 (1988): 159–68.
20. Hey, "Effective Framing in Design."
21. Mark Dodgson and David Gann, Innovation: A Very Short Introduction (Oxford: Oxford University Press, 2010).
22. Andrew B. Hargadon, How Breakthroughs Happen: The Surprising Truth about How Companies Innovate (Cambridge, MA: Harvard Business School Press, 2003).
23. W. Bernard Carlson, "Artifacts and Frames of Meaning: Thomas A. Edison, His Managers, and the Cultural Construction of Motion Pictures," in Shaping Technology / Building Society: Studies in Sociotechnical Change, ed. Wiebe E. Bijker and John Law (Cambridge, MA: MIT Press, 1992), 176.



Bibliography

Altshuller, Genrich. The Innovation Algorithm, translated by Lev Shulyak and Steven Rodman. Worcester, MA: Technical Innovation Center, 1999.
Beckman, Sara L., and Michael Barry. "Innovation as a Learning Process: Embedding Design Thinking." California Management Review 50, no. 1 (2007): 25–56.
Berners-Lee, Tim, with Mark Fischetti. Weaving the Web: The Past, Present and Future of the World Wide Web by Its Inventor. London: Orion, 1999.
Bucciarelli, Louis L. "An Ethnographic Perspective on Engineering Design." Design Studies 9, no. 3 (1988): 159–68.
Canada, Alonzo, Peter Mortensen, and Dev Patnaik. "Design Strategies for Technology Adoption." Design Management Review 18, no. 4 (2007): 32–41.
Carlson, W. Bernard. "Artifacts and Frames of Meaning: Thomas A. Edison, His Managers, and the Cultural Construction of Motion Pictures." In Shaping Technology / Building Society: Studies in Sociotechnical Change, edited by Wiebe E. Bijker and John Law, 175–98. Cambridge, MA: MIT Press, 1992.
Chesbrough, Henry. Open Innovation: The New Imperative for Creating and Profiting from Technology. Boston: Harvard Business School Press, 2003.
De Castella, Tom. "Should We Trust the Wisdom of Crowds?" BBC Magazine, July 2010, http://news.bbc.co.uk.
Dodgson, Mark, and David Gann. Innovation: A Very Short Introduction. Oxford: Oxford University Press, 2010.
Dorst, Kees, and Nigel Cross. "Creativity in the Design Process: Co-Evolution of Problem-Solution." Design Studies 22, no. 5 (2001): 425–37.
Hargadon, Andrew B. How Breakthroughs Happen: The Surprising Truth about How Companies Innovate. Cambridge, MA: Harvard Business School Press, 2003.
Heilemann, John. "Reinventing the Wheel." Time, December 2, 2001, http://www.time.com.
Hey, Jonathan. "Effective Framing in Design." PhD dissertation, University of California, Berkeley, 2008.
Kemper, Steve. Reinventing the Wheel: A Story of Genius, Innovation, and Grand Ambition. New York: HarperCollins, 2005.
Lanier, Jaron. You Are Not a Gadget: A Manifesto. New York: Knopf, 2010.
Leadbetter, Charles. We-Think: The Power of Mass Creativity. London: Profile, 2008.
Patnaik, Dev, with Peter Mortensen. Wired to Care: How Companies Prosper When They Create Widespread Empathy. London: Financial Times/Prentice-Hall, 2009.
Rivlin, Gary. "Segway's Breakdown." Wired, March 2003, http://www.wired.com.
Rothwell, Roy. Factors for Success in Industrial Innovations: Project SAPPHO: A Comparative Study of Success and Failure in Industrial Innovation. Brighton: University of Sussex, 1972.


Shirky, Clay. Here Comes Everybody: The Power of Organizing without Organizations. New York: Penguin, 2008.
Tapscott, Don, and Anthony D. Williams. Wikinomics: How Mass Collaboration Changes Everything. London: Atlantic Books, 2008.
Valkenburg, Rianne, and Kees Dorst. "The Reflective Practice of Design Teams." Design Studies 19, no. 3 (1998): 249–71.
Wai, Conrad, and Peter Mortensen. "Persuasive Technologies Should Be Boring." Lecture Notes in Computer Science 4744 (2007): 96–99, http://dx.doi.org/10.1007/978-3-540-77006-0_12.
Wales, Jimmy. Speech at the School of Information Management, University of California, Berkeley, 2005.
Walter, Patrick L. "The History of the Accelerometer, 1920s–1996." Sound and Vibration 41 (2007): 84–92.
Zirger, Billie Jo, and Modesto A. Maidique. "A Model of New Product Development: An Empirical Test." Management Science 36, no. 7 (1990): 867–83.
Zittrain, Jonathan. The Future of the Internet and How to Stop It. New Haven, CT: Yale University Press, 2008.


Chapter Ten

A Billion Fresh Pairs of Eyes
The Creation of Self-Adjustable Eyeglasses
Joshua Silver

In the developed world, more than 60 percent of the population use some form of vision correction—­either eyeglasses or contact lenses, or much less commonly, corneal refractive surgery. However, in the developing world, especially in rural parts that are not adequately served by optometrists—­such as sub-­Saharan Africa, where there may be only one optometrist for every 1 million people (compared with one for every five thousand people in the United Kingdom)—­only some 5 to 10 percent of people wear glasses. Estimates suggest that more than 100 million young people (ages twelve to eighteen) in the developing world need vision correction to function well in class—­in other words, to be able to read what is on the blackboard. If one assumes no great physiological difference between people in the developed and the developing worlds, then there is a total unmet need for vision correction worldwide of about 3 billion people. Since contact lenses and corneal surgery are probably not cost-­effective or appropriate for many people in the developing world, the solution would appear to be eyeglasses. But how can such large numbers of people be provided with eyeglasses of the correct prescription when they do not have access to an optometrist and cannot afford the price of conventional eyeglasses?
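A rough back-of-the-envelope check of that 3 billion figure can be made in a few lines of Python. The arithmetic here is mine, not the chapter's; the world population and the already-corrected total are round, circa-2012 assumptions.

```python
# Back-of-the-envelope check of the "about 3 billion" unmet-need figure.
# All inputs are round assumptions (circa 2012), not data from the chapter.
world_population = 7.0e9
needs_correction = 0.60      # developed-world rate, assumed to hold everywhere

total_need = world_population * needs_correction      # ~4.2 billion people

# Assume roughly 1.2 billion people already wear adequate correction,
# nearly all of them in the developed world.
already_corrected = 1.2e9

unmet = total_need - already_corrected
print(f"Unmet need: about {unmet / 1e9:.0f} billion people")  # ~3 billion
```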



One possible answer is a pair of inexpensive self-adjustable eyeglasses, using a fluid-filled chamber bounded by membranes as a lens, the basic principle of which I invented in the 1980s—though there were precursor technologies going back at least a century, if not earlier, as I come to later. After some years of development and testing, over fifty thousand pairs of my self-adjustable eyeglasses, so-called Adspecs, are now being worn in some twenty countries. Very recently, a new prototype has been produced by Child ViSion, a collaboration between the U.K.-based Centre for Vision in the Developing World in Oxford—which I helped to found in 2009—and the U.S.-based Dow Corning Corporation, a global leader in silicon-based technology, based in Midland, Michigan. The first of our new generation of self-adjustable eyeglasses should be distributed by the end of 2012, with a design intended to appeal aesthetically to the young people they are meant to help. One of the aims of the Centre for Vision in the Developing World is to ensure that 1 billion people will be wearing the eyeglasses they need, hopefully by the year 2020. If we can realize this vision, then it will truly be "a fantastic contribution to humankind"—as a senior official of Britain's National Health Service (NHS) remarked to me in 2012. By training I am an experimental atomic physicist. I was taught for my bachelor's degree in physics at St. Catherine's College, Oxford, by Neville Robinson, a most wonderful teacher and researcher, some of whose work in fact laid the foundation for at least one physics Nobel Prize. I carried on to do a doctorate in physics at St. Catherine's and then at Christ Church, Oxford, in the early 1970s, and I have spent much of my academic research career working in Oxford at the Clarendon Laboratory, though I have also worked in labs in Europe, the United States, and Asia, such as the Freie Universität in Berlin, the University of California at Berkeley, and the University of Electro-Communications (UEC) in Tokyo. I have published many research papers and supervised a few dozen doctoral students, mostly seeking to establish together whether relativistic quantum mechanics and quantum electrodynamics can accurately predict the properties of


one-electron and two-electron atomic systems—that is, hydrogenlike and heliumlike ions. However, since early in 1985, I have also pursued a rather different and absorbing interest in optics, which probably dates back to when I was a child in East London. As a ten-year-old, sometime in 1956, I played around with a plate, some aluminum foil, a ten-centimeter ring made of phenolic resin, and some of my mother's red nail varnish. With these ingredients I made my first variable optical device: a variable magnification "membrane" mirror. Using the nail varnish as glue, I attached a circle of foil to one side of the ring and the plate to the other. A hole in the middle of the plate allowed me to blow into the ring to change the shape, and thus the power, of the mirror. I remember it was not a very good mirror! Much later on, at Oxford, I was rather well trained in optics as part of my undergraduate degree at St. Catherine's, and then further in my doctoral research in atomic spectroscopy by my supervisor, Derek Stacey. In 1976 I became a lecturer in physics at the university and also fellow and tutor in physics at New College, and ended up teaching and lecturing on optics (among other things) at the undergraduate level to physicists at Oxford, both at an elementary and at a somewhat advanced level—a task that compels the lecturer to obtain a deeper level of subject understanding. All this background in optics turned out to be highly relevant to the creation of corrective eyewear of good optical quality. It gave me a significant advantage over anyone who might have tried to start similar work purely as an inventor, without any specialized training and expertise in optics. From my interactions with other inventors I suspect that they often underestimate the importance of formal training; perhaps the public does, too. The idea of making a self-adjustable lens arose following a casual conversation over lunch with a friend in the Senior Common Room of New College in March 1985. He asked me if I knew how to make a variable-focus lens, and I first told him no, that's impossible—but hold on a minute, ah yes, I can see how to do it—at which point I made a sketch of my first such lens, which I still have. After lunch I went up


to the Clarendon Lab, rummaged in the scrap box in our research workshop, found a suitable aluminum ring, drilled holes for a water feed and a seal, got a syringe from the lab stores, glued some very thin Mylar sheet to each side of the lens with Araldite, and hey presto, there was my first variable-focus lens. When we look at an object, the optics of the eye throw an image of the object onto the retina. The brain considers the image, and if it does not appear to be sharply focused, then the brain sends a signal to the eye's lens to change its shape so that the perceived image is clearer. For perhaps a third of us, as we look around, our lens can change shape ("accommodate") over a sufficient range of focus that this process works very well, and we get clear vision—but the other two-thirds of us have a problem: our range of focus is not sufficient to achieve this best-focus condition. We might suffer from myopia (nearsightedness), hyperopia (farsightedness), or presbyopia (a condition of aging that affects everyone over about forty-five, in which our lens loses its flexibility, so that focusing close up becomes impossible). The conventional solution is to wear eyeglasses. But since eyeglasses have lenses of fixed focus, this solution is rather limited. So suppose for a moment that one could create eyeglasses with lenses of which the focus could be varied in the same general way as the eye's lens. Such eyeglasses, if properly executed, could be rather more useful than eyeglasses of fixed focus. After that conversation in March 1985 I started to make more prototype variable-focus lenses—at first purely out of scientific curiosity. They were all fluid-filled membrane lenses, where the surface of the lens was formed by a clear elastic polyester membrane, and the focal length of the lens was changed by changing the volume of fluid in the lens and, as a consequence, the pressure on the membrane. To make a good fluid-filled membrane lens, one needs to stretch the membrane to remove wrinkles and also, separately, to seal it onto a carrier ring so that it does not leak fluid. As luck would have it, I had used a simple technique involving two flexible rubber O-rings to seal and stretch polyester membranes in my atomic physics research going way back to the 1970s.


I now applied this technique to improve my first prototype lens. A couple of months later, in May 1985, I designed a fluid-filled membrane lens that used two thin polyester membranes, and two double O-ring seals, to seal and stretch the membranes. This prototype lens had the feature that if it was filled with water and sealed onto a water-filled syringe, and the volume of water in the lens was then changed by pumping water in or out with the syringe, the surfaces of the lens changed their curvature, becoming either convex (using more water in the lens) and suitable for hyperopes or presbyopes, or concave (using less water in the lens) and suitable for myopes. I had thus created a simple lens of variable focal length, which could be controlled by a person moving the position of the plunger in the syringe. The natural thing to do with a lens is to look through it. Having filled my prototype lens with water from the attached syringe, I found that while looking through it, I could adjust its focal length to correct my 1.5-diopter myopia extraordinarily well. This result immediately set me thinking at several levels. First, if I could accurately correct my own myopia with a very simple—and inexpensive—variable-focus lens, could other people do the same? Second, I started to wonder how many people in the world actually need corrective eyewear. Putting those two thoughts together, I speculated on whether one could perhaps use such a self-adjustable refraction procedure to give vision correction to those who need it. Hardly surprisingly, many other claims were made on my time at this point in my academic career (not to speak of my family). In 1993 I eventually produced a prototype pair of self-adjustable eyeglasses, with the help of an excellent technician who used to work at the Clarendon Lab and who had his own workshop at home where he could machine parts that I had sketched. My design had two double O-ring membrane lenses. It looked very odd because of the two syringes sticking out, a little like horns, so that my grandson Charlie called the glasses "cows"!
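The optics behind such a lens can be sketched with the thin-lens "lensmaker's equation," which links the curvature the membranes take up to the lens power in diopters (a diopter being the reciprocal of the focal length in meters). The Python fragment below is purely illustrative and is not the actual Adspecs design calculation; the radius and the refractive indices are representative values I have assumed.

```python
# Illustrative sketch only: power of a thin, symmetric fluid-filled
# membrane lens in air, assuming both membranes bulge into spherical
# caps of equal radius R (in meters). Not the Adspecs design math.

def lens_power_diopters(radius_m: float, n_fluid: float = 1.33) -> float:
    """Lensmaker's equation for a thin symmetric lens: P = 2(n - 1)/R.

    Positive radius -> biconvex (converging, for hyperopes/presbyopes);
    negative radius -> biconcave (diverging, for myopes).
    """
    return 2.0 * (n_fluid - 1.0) / radius_m

# Correcting a 1.5-diopter myope (-1.5 D) with water (n ~ 1.33) needs
# concave surfaces of radius about 2 * 0.33 / 1.5, i.e., roughly 0.44 m.
print(lens_power_diopters(-0.44))               # ~ -1.5 D
# A higher-index fluid (n ~ 1.5, assumed) gives more power at the same
# curvature, or equivalently a flatter, thinner lens for the same power.
print(lens_power_diopters(-0.44, n_fluid=1.5))  # ~ -2.3 D
```

The second print statement anticipates a point made later in the chapter: replacing water with a higher-index fluid lets the lens stay thinner for a given power.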


The rather clunky physical appearance of these early self-adjustable eyeglasses was undoubtedly a problem—especially for self-conscious teenagers—but I am confident that the look of this eyewear will continue to improve very significantly. When, very early on in the venture, I read some of the background literature on variable-focus lenses and eyeglasses, I came across many earlier devices with fluid-filled variable-focus lenses, and indeed other kinds of lenses of which the wearer could change the focal length. I discovered that this area of technology has been of interest since at least 1879, when a medical doctor in Paris, Dr. Cusco, described his "dynamoptometre" in a French patent and also published a report on it in 1880 in La Nature.1 Cusco's device seems quite remarkable, even by modern standards, because the lenses could apparently be set to an accuracy of 0.01 diopters. It would appear to be the first recorded self-adjustable refraction device. Another one was developed in the 1960s by a second medical doctor, Martin Wright, working in the United Kingdom. It used fluid-filled lenses, incorporating a prescription lens, and was created for presbyopes to enable them to focus on close objects.2 Yet another, from the 1960s, was the work of the Nobel Prize–winning U.S. physicist and inventor Luis Alvarez. In his variable-focus lens, the optical power variation was achieved by moving two suitably shaped optical components (rather than using membranes and a fluid).3 Although the Alvarez lens was not apparently used by Alvarez for eyeglasses, he licensed it to the Polaroid Corporation, which used it in their Spectra camera and also in the Humphrey refraction device. Much more recently, the Alvarez lens (probably more accurately called the Alvarez-Lohmann lens to recognize the work of Adolf Lohmann, also in the 1960s) has been developed in four variants for use in self-adjustable eyeglasses.4 By 1994 I believed that fluid-filled self-adjustable eyeglasses could very probably answer the huge unmet need for vision correction in the developing world. If universal eyeglasses that wearers could adjust to their own prescriptions could be made at low cost, I thought this could



offer a way to deliver "one size fits all" to the very large populations not served by eyecare professionals. So in February 1994 I contacted Bjorn Thylefors, director of the Programme for the Prevention of Blindness at the World Health Organization (WHO) in Geneva, to inquire about the global need for eyeglasses. In a letter I told him:

I am a physicist and I have been developing ways of making adaptive lenses of good optical quality. I know from my own trials on myself that I can make adaptive lens spectacles which may be used to correct my own vision very well, and I believe it should be possible, using my technology, to manufacture such spectacles inexpensively for mass use, so that populations in the developing world could, for example, obtain useful vision correction without the expensive infrastructure which is normally associated with the eyecare industry in the developed world.

Bjorn Thylefors told me that a working group of the WHO had estimated the global need for eyeglasses at about 1 billion pairs in 1987.5 He strongly encouraged me to pursue my ideas—but to be sure to create eyeglasses that were not too expensive, costing perhaps about one dollar. "If you can do that, you should do it," he told me, in our very first conversation. Following this first interaction, I visited Dr. Thylefors in early 1996, and he tried out a pair of the eyeglasses I had made in 1993, found that they worked quite well for him, and suggested I should run a trial in a developing world country. This led to a small field trial in Ghana in 1996 supported by the U.K. government's Overseas Development Administration, and in particular their head of health and population, David Nabarro, using a new and much more wearable pair of eyeglasses, which he christened "Adspecs." By now, I had replaced the water in the eyeglasses with Dow Corning silicone, a fluid with a higher refractive index than



water. This fluid gave a thinner lens for a given power than water; it was also extremely inert, and compatible with the membrane and other lens materials. The results were rather encouraging, and they were presented at the sixth General Assembly of the International Agency for the Prevention of Blindness in Beijing in 1998 by my Ghanaian colleague George Afenyo and me; a report on the work, titled "Vision Correction with Adaptive Spectacles," was published in the book World Blindness and Its Prevention.6 The Adspecs used in that first Ghanaian trial are now in the National Collection of the Science Museum in London, partly as a result of my demonstrating them on the BBC children's television program Blue Peter. After the program was broadcast, the museum asked for the Adspecs. I was pleased to donate them. I had patented my first variable-focus lens in 1985. As the project gathered some momentum, I decided that it could be helpful to obtain further patents for the new designs I was developing, although I should say that my attitude to patenting has from the beginning been somewhat ambivalent. The main driver for the project has always been my concern to produce high-quality, low-cost eyewear for the benefit of people in the developing world. But this raises the tricky question as to whether technology such as mine should really be protected by patents, or rather free to use. Comprehensive patent protection can be expensive, so I decided to proceed in a cautious way and obtain just some degree of patent protection. Such an approach leaves the inventor the freedom to grant licenses for use, and has the further advantage that it can prevent others from applying the technology only for profit. In general, I believe that it is not wrong for inventors to benefit financially from inventing technologies that people can genuinely afford, but that it is absolutely wrong for companies, institutions, or individuals to use their ownership of patents to prevent the poor or underprivileged from getting access to technologies, drugs, and so on that they may desperately need. As already mentioned, more than fifty thousand pairs of Adspecs


have been distributed in over twenty countries. Some forty thousand of these have been acquired over time by a humanitarian assistance scheme started by a U.S. Marine major, Kevin White, who met me in 2005 and then obtained permission to buy Adspecs for the U.S. government. Later, after he left the Marines, Kevin set up Global Vision 2020, a not-for-profit organization dedicated to improving people's vision, which has worked in Africa. In one of many inspiring stories of individuals helped by Global Vision 2020, a widowed Liberian carpenter and father of nine, Arthur Walker, after losing much of his power of vision, was able to return to work as a carpenter by wearing a pair of correctly adjusted Adspecs. Why has it taken such an apparently long time—more than twenty years from first concepts—to see any significant number of adaptive eyeglasses being worn in the developing world? Part of the delay has been my stance all along that testing the self-adjustable refraction procedure for both accuracy and acuity is crucial. By "accuracy," I mean the accuracy to which a wearer can self-adjust the focal length of the eyeglasses to correct their refractive error. By "acuity," I mean the clarity or sharpness of vision a wearer can achieve with such a procedure. One must be sure, for ethical reasons, with a new procedure such as refraction with self-adjustable eyeglasses, that it is sufficiently accurate. In other words, the procedure must actually work optically: at the end of the procedure, wearers must have corrected their refractive errors with reasonable accuracy, and they must also be able to achieve reasonable visual acuity. However, this turns out to be an important new and still-pending area of vision research: debate continues over the best method for establishing a subject's refractive error. Also, as yet, no global standard exists for acuity. Whereas in the United States and United Kingdom, an optometrist is expected to achieve so-called 20/20 vision (6/6 vision in metric units) for a patient when correcting the patient's refractive error, the WHO currently employs a significantly poorer standard for acuity worldwide. The WHO's difficulty is that if they were to take the position that everyone in the world should be


corrected to 20/20 vision, then vision correction would become the largest health need in the world. Another key point is that self-adjustable refraction involves people doing for themselves something that a qualified practitioner has traditionally done for them. It would be understandable if practitioners took the position that the correct degree of refraction is best prescribed by the profession after the usual eye tests; and so research has been needed to demonstrate that self-adjustable refraction is safe and accurate for the patient. Again, this area of vision research is important and evolving, as I indicate below. But how does an atomic physicist, starting with a knowledge of optics but with no ophthalmic or optometric training, set about testing the accuracy of self-adjustable refraction? The answer can be found in the vision research papers I have published, starting with my first paper written jointly with George Afenyo, the optometrist from the Ghanaian government with whom I worked on the 1996 trial of the first Adspecs. My second and third papers on vision, which described a collaborative piece of research and were published in 20037 and 2004,8 describe the results of a second, rather larger field study in four countries of how more than two hundred adults could self-refract. I quote from the 2004 paper in order to give an idea of the field methods used:

The experiments were performed in South Africa, Ghana, Malawi and Nepal and the results collated. A total of 213 participants between the ages of eighteen and sixty were selected by agencies in each country and communicated with through an interpreter. Distance visual acuity was measured with either a standard Snellen chart or an illiterate E-chart positioned at six meters. Because of the nature of the location some of the eye tests were performed outside in daylight and so illumination was subject to variation. The unaided vision data for each subject was recorded for each eye while occluding the fellow eye. For those subjects


who could not read an entire line, the number of unread letters, N, on that line was also recorded. Then, using the conventional trial frame method, an optometrist determined the refraction for each subject, recording the spherical and cylindrical correction for each eye. The vision test procedure was repeated with the same chart to obtain the subjects' acuity using trial lenses. The trial lenses were removed and the subject was asked to relax their eyes by looking at a distant target.

A visual target was chosen at a distance greater than six meters and the subject was then asked to wear the Adspecs and to carry out the following adjustment protocol. Both left and right Adspec lenses were initially set to +6 diopters before they were worn to provide sufficient fogging to eliminate unwanted accommodation. The subject's left eye was occluded and the subject then asked to adjust the right lens, slowly decreasing the power until the target came into sharp focus. The right eye was then occluded and the left eye revealed. The subject was subsequently asked to adjust the left lens in the same manner until the target was again in focus. Then, when viewing the target binocularly, the subject was asked to adjust the power and go slightly past the point of sharpest focus until the image began to blur and then turn the dial backwards slightly to achieve the sharp focus again. This is based on our observation that best acuity could be achieved if a final fine adjustment was made binocularly, i.e., when the vergence system functions.

Following this, the subject was given an acuity test using the same chart as in the preceding tests. The binocular acuity obtained whilst using the Adspecs was recorded. The Adspecs were then removed and the spherical power of each lens was determined using either a Pentax OLH-10 (Pentax Corporation, Tokyo, Japan) or a Topcon (Topcon Corporation, Tokyo, Japan) focimeter.9
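Two numerical conventions in the quoted methods are easy to make concrete: a Snellen fraction such as 6/6 or 6/7.5 is routinely converted to the logMAR scale for analysis, and a sphere-plus-cylinder refraction is often summarized by its "equivalent sphere" (sphere plus half the cylinder), a term the chapter returns to below. A minimal sketch in Python, with the example prescription values being my own assumptions:

```python
import math

def snellen_to_logmar(test_distance: float, letter_distance: float) -> float:
    """logMAR = log10 of the minimum angle of resolution (MAR).

    For Snellen notation d/D (6/6, 6/7.5, 20/20, ...), MAR = D / d,
    so 6/6 (i.e., 20/20) gives logMAR 0.0 and larger values are worse.
    """
    return math.log10(letter_distance / test_distance)

def equivalent_sphere(sphere: float, cylinder: float) -> float:
    """Spherical equivalent of a sphere/cylinder prescription, in diopters."""
    return sphere + cylinder / 2.0

print(snellen_to_logmar(6, 6))          # 0.0    -> the 6/6 ("20/20") standard
print(snellen_to_logmar(6, 7.5))        # ~0.097 -> the 6/7.5 level cited below
print(equivalent_sphere(-1.00, -0.50))  # -1.25 D (assumed example values)
```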


This study showed that adults self-refract rather well with the Adspecs—something that Kyla Smith of the New England College of Optometry later confirmed in her master's degree research.10 Don Bundy of the World Bank in Washington, DC, has known about my work on vision for a long time, ever since we were introduced by a colleague, Michael Wills, in the 1990s. Don has a particular interest in children's health and education, and he was keen for me to investigate whether a self-adjustable refraction approach might be useful for children. As early as 2003, in fact, I wrote to Don to ask for World Bank funding to research self-refraction by children, and our dialogue led to a meeting on child vision at Wolfson College, Oxford, in July 2007. That meeting concluded that it would be interesting to study whether children could self-refract, as determined through a study of myopic teenagers. The World Bank then provided support for three studies of myopic teenagers, two in China and one in the United States. The results of the two Chinese studies have now been published, the first in Ophthalmology,11 the second more recently in the British Medical Journal,12 while the results of the Boston study are being written up for publication now. The results from this first Child Self-Refraction Study have shown that myopic teenagers can self-refract with surprisingly good accuracy, with well over 90 percent achieving an acuity of 6/7.5. Indeed the research has delivered the rather intriguing result that some of the teenagers appear to see rather better when they self-refract than their apparent residual refractive error should allow. This unexpected discovery must be followed up both from the point of view of academic curiosity and because of its possible practical importance. Astigmatic correction is another area for researchers' active discussion. Self-refracting glasses cannot currently correct astigmatism, but how important is astigmatic correction—the correction of the cylindrical component of the patient's refractive error—to visual performance, that is, the performance of visual tasks? For most people, if you correct just their spherical refractive error (technically, their



"equivalent sphere"), they can perform adequately many tasks that need good vision, such as driving, reading, and so forth. Perhaps if people truly needed cylindrical correction, evolution would have taken care of it—that is, the ciliary muscle of the eye would have evolved so as to change the cylinder as well as the sphere of the lens. Actually, the recent World Bank–supported Child Self-Refraction Study shows that the great majority of our subjects had good acuity after self-refraction, even though we did not make a cylindrical correction. Opticians tend to emphasize the "need" to correct astigmatism, but refractive error statistics suggest that perhaps only 10 percent of people will have problems if astigmatism is not corrected. The advantages and disadvantages of self-adjustable refraction are a live issue. An audience of health professionals at the National Health Service's Innovation Expo in London in March 2011, following my demonstration of the Adspecs design of eyeglasses, voted self-adjustable refraction to be "The idea most likely to make the biggest impact on health care by the year 2020."13 The British Medical Journal thought that the Chinese study was sufficiently interesting to refer to it on the front cover of its print edition in August 2011. One might have thought that it would be straightforward to find research funding to establish a deeper understanding of the self-adjustable refraction process, and indeed to optimize the self-adjustable refraction procedure itself. Yet a request made in 2012 to the UK government's Department of Health elicited the surprising, if not indeed perhaps astonishing, statement that "the Department of Health and its research programmes and funding are focused on the needs of the NHS and patients and public in this country. So whilst we can see that this technology might be very useful for developing countries, it is not something we can pursue." Now one might perhaps think that such a response is not very scientific, so I requested that the apparent blanket decision not to fund this important research be reconsidered. This then led to the further surprising statement from a DoH official:



We appreciate that what works to correct children's vision in the Third World should, optically, work in the UK as well, and I can understand you would aspire to see the technology applied more widely. However, as the Centre for Vision in the Developing World's own information admits, "current . . . designs are not generally recognized for their aesthetic beauty." At the present time, it would be difficult to justify evaluating the technology in the U.K.

Inventors and innovators have always faced such hurdles. Government bodies have their own political and economic imperatives for supporting or ignoring new technologies. Since clear vision is so important for the health, education, economic activity, and general quality of life of very large numbers of people around the world, we shall of course continue to research self-adjustable refraction, so as to improve the procedure and the devices themselves. Over the now quite lengthy period during which I have worked on vision, self-adjustable eyewear has always seemed a new technology that is bound to grow. It has been shown to work well optically, and it can also offer great efficiencies in the delivery of corrective eyewear. Although the average cost of eyeglasses to the consumer in the United Kingdom is something over a hundred pounds delivered in the conventional way, Adspecs self-adjustable eyeglasses cost only around ten pounds and can be delivered directly to wearers, who can then set them for themselves, unaided by a professional, using a simple procedure. The Adspecs are only a first, and still relatively rudimentary, device, which is certainly capable of significant improvement, especially if such improvement is not restricted to one particular sort of lens. Of course, whether self-adjustable refraction eyewear deriving from my original 1985 idea actually achieves the target I have suggested earlier—a billion people wearing the eyeglasses they need by the year 2020—is something that only time will tell.



Acknowledgments

I would like to thank Lawrence Jenkin for providing the original frame for the Adspecs, which is based on the "Boston" frame that he designed, and George Afenyo, Anthony Carlson, Nathan Congdon, David Crosby, Mehdi Douali, Leon Ellwein, Kate Griffin, Mingguang He, Graeme Mackenzie, Bruce Moore, Michael Wills, and Chris Wray.

Notes

1. "Lentille à Foyer Variable du Docteur Cusco," La Nature, no. 568 (1880): 55. A version of this article appeared in English as "A Lens with Variable Focus," Scientific American 43, no. 9 (August 28, 1880): 131.
2. B. M. Wright, "Variable Focus Spectacles," Transactions of the Ophthalmological Societies of the United Kingdom 98 (1978): 84–87.
3. L. W. Alvarez, "Development of Variable Focus Lenses and a New Refractor," Journal of the American Optometric Association 49, no. 1 (1978): 24–29.
4. A. W. Lohmann, "A New Class of Varifocal Lenses," Applied Optics 9 (1970): 1669–71.
5. The Provision of Spectacles at Low Cost (Geneva: World Health Organisation, 1987).
6. G. D. Afenyo and J. D. Silver, "Vision Correction with Adaptive Spectacles," in World Blindness and Its Prevention, vol. 6, ed. R. Pararajasegaram and G. N. Rao (Hyderabad: International Agency for the Prevention of Blindness, 2001), 201–8.
7. J. D. Silver et al., "How to Use an Adaptive Optical Approach to Correct Vision Globally," South African Optometrist 62, no. 3 (2003): 126–31.
8. M. G. Douali and J. D. Silver, "Self-Optimised Vision Correction with Adaptive Spectacle Lenses in Developing Countries," Ophthalmic and Physiological Optics 24 (2004): 234–41.
9. Ibid., 236.
10. K. Smith, E. Weissberg, and T. G. Travison, "Alternative Methods of Refraction," Optometry and Vision Science 87 (2010): 176–82.
11. M. He et al., "The Child Self-Refraction Study Results from Urban Chinese Children in Guangzhou," Ophthalmology 118, no. 6 (2011): 1162–69.
12. M. Zhang et al., "Self Correction of Refractive Error among Young People in Rural China: Results of Cross Sectional Investigation," British Medical Journal 343 (2011): 407 (summary); full article at http://www.bmj.com.


13. The demonstration, titled "Medical Innovations: Adaptive Eyewear," can be seen at http://www.youtube.com/watch?v=r1rqgvbs9GQ.

Bibliography

Afenyo, G. D., and J. D. Silver. "Vision Correction with Adaptive Spectacles." In World Blindness and Its Prevention, vol. 6, edited by R. Pararajasegaram and G. N. Rao, 201–8. Hyderabad: International Agency for the Prevention of Blindness, 2001.
Alvarez, L. W. "Development of Variable Focus Lenses and a New Refractor." Journal of the American Optometric Association 49, no. 1 (1978): 24–29.
Centre for Vision in the Developing World. http://www.vdwoxford.org/home/.
Douali, M. G., and J. D. Silver. "Self-Optimised Vision Correction with Adaptive Spectacle Lenses in Developing Countries." Ophthalmic and Physiological Optics 24 (2004): 234–41.
He, M., et al. "The Child Self-Refraction Study Results from Urban Chinese Children in Guangzhou." Ophthalmology 118, no. 6 (2011): 1162–69.
"Lentille à Foyer Variable du Docteur Cusco." La Nature, no. 568 (1880): 55. A version of this article appeared in English as "A Lens with Variable Focus." Scientific American 43, no. 9 (1880): 131.
Lohmann, A. W. "A New Class of Varifocal Lenses." Applied Optics 9 (1970): 1669–71.
Mayor, Louise. "A Global Vision for Vision." Physics World (July 2010): 29–32. This article describes the history and development of the project to make self-adjustable refraction eyeglasses.
The Provision of Spectacles at Low Cost. Geneva: World Health Organisation, 1987.
Silver, J. D., et al. "How to Use an Adaptive Optical Approach to Correct Vision Globally." South African Optometrist 62, no. 3 (2003): 126–31.
Smith, K., E. Weissberg, and T. G. Travison. "Alternative Methods of Refraction." Optometry and Vision Science 87 (2010): 176–82.
Wright, B. M. "Variable Focus Spectacles." Transactions of the Ophthalmological Societies of the United Kingdom 98 (1978): 84–87.
Zhang, M., et al. "Self Correction of Refractive Error among Young People in Rural China: Results of Cross Sectional Investigation." British Medical Journal 343 (2011): 407 (summary); http://www.bmj.com.


Chapter Eleven

New Ideas from High Platforms
Multigenerational Creativity at NASA
Baruch S. Blumberg

Creativity usually emerges from new ideas, and new ideas often emerge from new observations. Searching in previously unreachable locations increases the possibility of encountering and observing the new. Space is such a location. We can now observe, from high platforms, natural phenomena that were never previously available for study. The spaceships, rockets, and satellites that allow extensive searches into the universe were not available as recently as two decades ago. They are akin to the telescope and the microscope of Galileo Galilei and Antoni van Leeuwenhoek, who saw something previously unseen whenever they pointed their instruments in new directions. Space is a rich source of new observations, new ideas, and accelerated creativity. In this article I survey some of the creative possibilities that have arisen from space research in the recent past and that may arise from it in the immediate future, notably at the National Aeronautics and Space Administration (NASA), where I worked for several years in various fields. Since astrobiology is my primary interest here, I concentrate on that to begin with, before touching on some other areas of space research. Astrobiology is a great producer of new ideas from which many novel hypotheses and models can be formulated. I first came across

baruch s. blumb erg

it after many decades working in the field of medicine, when I left research for a while to teach students at Stanford University in 1997– 1998. I was invited to attend an Astrobiology Roadmap Workshop held in mid-­1998 at NASA’s Ames Research Center, at Moffett Field in nearby Mountain View, California. The proceedings were fascinating and encouraged me to learn more about this emerging discipline. NASA had recently established an astrobiology program and had invited several hundred scientists from the space science and general science communities to discuss and formulate a program for astrobiology. The mission statement was “The study of the origin, distribution, evolution and future of life on earth and in the universe”—­no mean program. Astrobiology is concerned with life as a planetary phenomenon—­that is, how biology interacts with celestial objects. It addresses the fundamental questions: “How did life originate?” “Are we alone in the universe?” and “What is the future of humans in space?” Implied in this mission statement and these questions is an additional question: “What is life and how is it characterized?” Allied to this question is the need to understand death, in other words, the answer to the question: “When does life cease and how can its effects be detected and measured in fossil remains and in the influences that life has on its environment that remain after the disappearance of the living material?” These are intriguing questions, of interest not only to scientists but also to philosophers, the religious minded, ethicists, and many others. NASA proposed to study the issues through the scientific process. As Paul Davies wrote in his book on astrobiology, The Fifth Miracle: The Search for the Origin of Life, life has the characteristic (to use philosophical terminology) of “being” and “becoming.” Life exists in a particular form now, yet has the potential, because of the diversity in its offspring, of becoming something related to, but also different from, its current form. A few months later I was asked to cochair another roadmap workshop along with the Nobel laureate Richard Roberts; the title was “Genomic Studies on the International Space Station.” It was an exciting program 228

new ideas from high pl atforms

during which I met more of the NASA staff, both in the workshop and afterward. Soon after this I was asked if I would allow my name to be put forward as the director of the recently established NASA Astrobiology Institute (NAI). This was a surprise given my inexperience in astrobiology. The reason, apparently, was that NASA wanted an experienced scientist to oversee the program’s initiation. After interviews with Daniel Goldin, the then administrator of NASA, I was appointed NAI’s founding director.

The NASA Astrobiology Institute

The NAI was (and remains) a "virtual" institute: each of its research teams remained in their home institutions. They were well funded by NASA and expected to take part in the NAI's activities through both face-to-face collaboration and electronic means. The director had, theoretically, a large measure of control over the grantees' research, but it was apparent that a top-down hierarchical model for management was inappropriate for the independent-minded scientists the field attracted.

The general trend in science is to become more and more specialized in a narrow discipline. The NAI was deliberately organized in the opposite direction. Astrobiology included disciplines in which I did not have any formal training: geology, paleontology, oceanography, astronomy, and cosmology, as well as the engineering knowledge needed to understand the technology that is a major part of any space mission. I relied heavily on the NAI's Executive Council, made up of the principal investigators of each of the eleven teams that we funded. Although they were formally an advisory group, I nearly always took their advice, giving them de facto authority.

I understood that my mandate was to establish a basic-science organization that could discover and understand natural phenomena that related to early life and to life elsewhere. At an introductory address to the members of the institute I said that I did not expect them to do exactly what they said they would do in their applications since, in a fast-moving field, observations made after a grant application had been written could greatly change the path of research. This remark was greeted with cheers.

Fortunately, the appeal of astrobiology attracted outstanding NASA professionals to the NAI staff located at our headquarters at Ames Research Center. They enabled the institute not only to operate efficiently but also to innovate. There were major barriers to surmount in order to produce the culture of collaboration that we sought. Collaboration had to bridge different scientific disciplines, institutions in a variety of countries, large geographical distances, and disparate age groups. We developed techniques for realizing our goal of collaboration, which included:

1. A modern videoconferencing capability between each of the teams.
2. Frequent face-to-face meetings, so that collaborators knew each other personally and could therefore relate better using electronic communication.
3. The funding of field trips that included members of several teams, thus increasing the opportunity for people to learn about their colleagues' scientific and other interests.
4. A website that would help to bind the participants together and serve as a repository for mutually used data.
5. The funding of research fellows who could migrate from team to team to facilitate communication between the teams.
6. The production of real-time interactive video lectures and conferences that could include members from many teams.

Overall, the management structure was dispersed rather than command-and-control. We encouraged the teams to communicate and collaborate directly with each other without the need to go through NASA Central.

From the beginning, the NAI placed a strong emphasis on international cooperation. Space exploration has been a remarkably international process. Even during the depths of the Cold War, Soviet and U.S. astronauts, cosmonauts, and space scientists collaborated on projects, as did their governments. The NAI recognized that the search for life in the solar system could not be exclusively a U.S. activity. It is a universal program in which nations who wish to collaborate, and can, should be encouraged to do so. Initially, several countries requested association or affiliation with us, in part to demonstrate to their own governments that they had international recognition. Eventually, this recognition was converted into a federation of national astrobiology institutes, which resulted in some rich and effective international programs.

To give a few examples, astrobiology includes understanding the origin of elements and chemicals in the early universe: prebiotic chemistry, that is, how simple organic molecules (many of which are found in space) can assemble to form the long-chain molecules—proteins, DNA, RNA, long-chain sugars, glycoproteins, and fats—that are essential for life as we understand it. Astrobiology also includes the study of locations on the contemporary Earth that have similarities to the early Earth and, hence, the early Mars. (Earth and Mars had many similarities in their early days, when their environments were much harsher than at present.) Locations include geothermal sites, the sites under the oceans where "black smokers" form as one tectonic plate subducts under another tectonic plate, the deep-ocean sediments and the subocean floor, and sites of extreme temperature and pH. These are exciting sites for the study of geology and geochemistry and also for the bacteria, archaea, viruses, and multicellular organisms that flourish in what humans consider to be harsh environments.

I participated in field trips to several of these locations—for instance, southern California's Death Valley, an area of high temperature, high salinity, and scarce rainfall; Yellowstone National Park, the largest geothermal site in North America; Iron Mountain Mine in northern California, where the outflow has a pH approaching zero; the Haughton Impact Crater on Devon Island in the Canadian Arctic Archipelago in the territory of Nunavut; the salt ponds in Guerrero Negro, Baja California, which are home to large bio-mats that are common in contemporary extreme environments and were also common in the Cretaceous period; and Mono Lake below the eastern slope of the Sierra Nevada in California, where geothermal sites with low pH and high mineral concentrations are found. These places made for very interesting adventures in the company of a multidisciplinary team of scientists. I formed the NAI Virus Focus Group, which hosted workshops and field trips, to study viruses and phages in these environments.

During the time I directed the NAI, from 1999 to 2002, I also served (for a year) as senior advisor for biology to the administrator of NASA at its headquarters in Washington, DC, with a focus on life sciences. The tagline for the program was "Life beyond its planet of origin"—in particular, the health of humans in low Earth orbit, on the Moon, and eventually on excursions to Mars. Working in the shadow of Capitol Hill where the decisions were made on the priorities and funding of NASA was strange, stimulating, and frustrating, often at the same time. The administrator, Daniel Goldin, whom I theoretically advised, was a dynamic and visionary leader with a forceful management style. More than once, I flew with him on NASA One to the launches of the Space Shuttle at the Kennedy Space Center. We landed on the rarely used Skid Strip and, with a police escort, proceeded to the Saturn building and then to the viewing platform for the mind-blowing launch. It was a big change from hanging out in a biochemistry laboratory.

I enjoyed my association with NASA over the course of more than five years. I had to learn a whole science that, on most days, found me at the edge of my intellectual capabilities, a happy challenge in my mid-seventies. The people I met were different from the medical and biological scientists of my previous scientific life: aviators, astronauts, astronomers, cosmologists, geologists, oceanographers, paleontologists, and senior government and political figures. I enjoyed the outdoors life of California with wonderful walks in the Santa Cruz Mountains, backpacking trips into the remote Trinity Mountains near Oregon, miles of walking on lonely beaches with the ocean on one side and sheer cliffs on the other. Stanford University and the NASA Ames Research Center were in the middle of Silicon Valley; the enthusiasm, entrepreneurial spirit, and optimism of the place were sources of stimulation and excitement.

Space and Creativity

I now come to a few of the wide range of recent, current, and future space-related missions of exploration and discovery; the contributions (to both astrobiology and other fields) that these discoveries have made to our Earth-bound life; and the multigenerational nature of long-term space research. NASA and other national space agencies have many such missions under way, and others are projected. Recently, nongovernmental launches have carried instruments with space-based science capability, and there will likely be many more.

The Kepler mission, a NASA Discovery mission, is designed to survey our region of the Milky Way galaxy to detect and characterize planets, including potentially Earth-size planets, in or near the habitable zone orbiting suns other than our own Sun. (The habitable zone is the planetary orbit an appropriate distance from its sun where life as we currently understand it could survive and flourish.) The purpose of the mission is to seek a "Pale Blue Dot" (PBD). This term was used by the visionary Cornell University astronomer Carl Sagan for the iconic photograph taken of Earth by Voyager 1 on February 14, 1990, on its way out of our solar system and into deep space. The PBD has become a metaphor for the search for life outside our solar system. The Kepler mission's instruments have detected evidence for many extrasolar planets, some within the habitable zone around their distant suns. A decade ago only a handful of extrasolar planets were known; now, using both terrestrial and space telescopes, over two thousand have been detected. A very large percentage of these and other so-far-undetected solar systems will have planets capable of harboring life. These new observations greatly increase the probability of life elsewhere, but it is important to recognize that, so far, there is still no direct evidence of life anywhere but on Earth.
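The scaling at work in the habitable zone can be made concrete with a little arithmetic. The sketch below is a first-order illustration rather than anything drawn from the chapter: it assumes the standard rule that habitable-zone radii scale as the square root of stellar luminosity, and it borrows approximate flux limits (about 1.1 and 0.53 times the flux Earth receives) from commonly cited climate estimates.

    import math

    # Assumed flux limits in units of Earth's flux; these are
    # rough textbook values, not figures given in the chapter.
    S_INNER = 1.10  # inner (runaway greenhouse) limit
    S_OUTER = 0.53  # outer (maximum greenhouse) limit

    def habitable_zone_au(luminosity_solar):
        """Estimate (inner, outer) habitable-zone radii in AU for a
        star whose luminosity is in units of the Sun's luminosity."""
        inner = math.sqrt(luminosity_solar / S_INNER)
        outer = math.sqrt(luminosity_solar / S_OUTER)
        return inner, outer

    print(habitable_zone_au(1.0))   # Sun: about (0.95, 1.37) AU, bracketing Earth
    print(habitable_zone_au(0.25))  # a dimmer star: about (0.48, 0.69) AU

For the Sun the zone so computed brackets Earth's orbit, which is the sense in which Kepler's Earth-size candidates "in or near the habitable zone" are of interest.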


Among the missions surveying Earth, NOAA-N Prime is a polar-orbiting satellite developed by the NASA Goddard Space Flight Center for the National Oceanic and Atmospheric Administration (NOAA). NOAA uses two satellites, a morning and afternoon satellite, to ensure that every part of the Earth is observed at least twice every twelve hours. NOAA-N Prime collects information about Earth's atmosphere and environment to improve weather prediction and climate research across the globe. NASA and NOAA are actively engaged in a cooperative program, the multimission Geostationary Operational Environmental Satellite (GOES) series N-P. This series will be a vital future contributor to weather, solar, and space operations and science.

The Wide-field Infrared Survey Explorer (WISE) surveyed the entire sky in the mid-infrared with far greater sensitivity than any previous mission or program, from late 2009 until early 2011. During October 2010, for example, WISE detected more than 33,500 new asteroids and comets, and over 154,000 solar system objects. The International Space Weather Initiative (ISWI) observes our Sun and is part of the Living with a Star program of NASA. The name of the program itself is an indicator of the imaginative possibilities of future missions.

Looking at the Moon, the Lunar Reconnaissance Orbiter (LRO) and the Lunar Crater Observation and Sensing Satellite (LCROSS) mission were co-manifested in a 2009 launch. The LRO's mission objective is to photograph at high precision the surface of the Moon, find safe Moon landing sites, locate potential in situ resources, characterize the radiation environment, and test new technology. The LCROSS was a spectacular mission in which the end stage of the launch vehicle was impacted on October 9, 2009, into the bottom of a deep polar crater that has never seen sunlight, to determine if water could be detected in the vaporized ejecta. I was present at the control facility for the final impact and can testify to the excitement of this highly successful mission that confirmed the presence of water in the crater.

As for Mars exploration, the remarkable Mars Exploration Rovers, named Spirit and Opportunity, with original engineering specifications for a ninety-day mission, survived for more than six years, despite the wearing down of their gyros, damaged wheels, severe dust storms, and other Martian hazards. Together, the rovers explored kilometers of Martian surface, analyzed rocks with their grinders and confocal microscopes, and provided detailed photographs of the geology inside large and small impact craters where they could see what appeared to be sedimentary layers deposited by water. A major mission for the future is to determine if water once flowed on the surface, a condition that could have allowed life to exist. The Mars Science Laboratory—which landed on Mars in August 2012—is a rover intended to assess whether Mars ever was, or is still today, an environment able to support microbial life, and to determine the planet's habitability. This mission is the largest yet in NASA's Mars astrobiology program, which has multiple missions projected over the course of the next few decades. It will eventually lead to a return mission to Earth—that is, the retrieval of material from Mars for detailed analysis that cannot be executed robotically in situ.

Recent deep-space missions have included Galileo, the unmanned NASA spacecraft launched in 1989, which reached Jupiter in 1995, and Cassini-Huygens, the NASA and European Space Agency's planetary explorer launched in 1997, which reached Saturn in 2004 and is still in operation. Cassini went into orbit around Saturn and launched its Huygens probe through the atmosphere of Saturn's moon Titan, which executed a soft landing on Titan. It provided extraordinary measurements of this unique moon, with its gaseous, liquid, and frozen methane. Cassini also made several passes close to Saturn's moon Enceladus, which is spewing a plume of material containing organic molecules, leading to speculation that some form of primitive life, or perhaps a prebiotic organic soup, may lie beneath this moon's icy crust.

As an example of the multiple tasks possible for an exploring spaceship, I have adapted below a summary report on the Galileo mission from the NASA website. It illustrates the richness of the discoveries, and hence the new ideas and the opportunities for creativity, possible in an environment where little is previously known. These include discoveries not anticipated prior to launch. Here is my summary:

Galileo was launched from the cargo bay of the Space Shuttle Atlantis in 1989. The exciting list of discoveries started even before Galileo got a glimpse of Jupiter. As it crossed the asteroid belt in October 1991, Galileo snapped images of Gaspra, returning the first ever close-up image of an asteroid. Less than a year later, the spacecraft got up close to yet another asteroid, Ida, revealing it had its own little "moon," Dactyl, the first known moon of an asteroid. In 1994 the spacecraft made the only direct observation of a comet impacting a planet—comet Shoemaker-Levy 9's collision with Jupiter.

The descent probe made the first in-place studies of the planet's clouds and winds, and it furthered scientists' understanding of how Jupiter evolved. The probe also made composition measurements designed to assess the degree of evolution of Jupiter compared to the Sun.

Galileo made the first observation of ammonia clouds in another planet's atmosphere. It also observed numerous large thunderstorms on Jupiter many times larger than those on Earth, with lightning strikes up to 1,000 times more powerful than on Earth. It was the first spacecraft to dwell in a giant planet's magnetosphere long enough to identify its global structure and to investigate the dynamics of Jupiter's magnetic field.

Galileo determined that Jupiter's ring system is formed by dust kicked up as interplanetary meteoroids smash into the planet's four small inner moons. Galileo data showed that Jupiter's outermost ring is actually two rings, one embedded within the other.

Galileo extensively investigated the geologic diversity of Jupiter's four largest moons: Ganymede, Callisto, Io, and Europa.


Galileo found that Io's extensive volcanic activity is 100 times greater than that found on Earth. The moon Europa, Galileo unveiled, could be hiding a salty ocean up to 100 kilometers (62 miles) deep underneath its frozen surface containing about twice as much water as all the Earth's oceans. Data also showed Ganymede and Callisto may have a liquid-saltwater layer. The biggest discovery surrounding Ganymede was the presence of a magnetic field. No other moon of any planet is known to have one.

The prime mission ended after two years of orbiting Jupiter. NASA extended the mission three times to continue taking advantage of Galileo's unique capabilities for accomplishing valuable science. The mission was possible because it drew its power from two long-lasting radioisotope thermoelectric generators provided by the Department of Energy. The mission ended on 21 September 2003, when the spaceship was purposely crashed into Jupiter, its destination planet.

Current deep-space missions include the NASA Great Observatories: the Hubble space-borne visible light telescope; Spitzer, the most sensitive infrared space observatory ever launched; the Chandra X-Ray Observatory; and the Far Ultraviolet Spectroscopic Explorer mission. A replacement for Hubble, the James Webb Space Telescope, has been planned since 1996 and is scheduled for launch, perhaps in 2018.

Many space missions take a long time to plan, gain approval, design, build, launch, and arrive at their final destinations. Perhaps the longest period of all is the time required to analyze and interpret the vast amounts of data a mission collects. For example, New Horizons, the NASA mission to the dwarf planet Pluto in the distant Kuiper Belt, was launched in 2006 and will reach Pluto only in 2015. The planning for the mission probably began ten or more years before its launch. The data analysis will take at least a dozen years from data return, if previous projects are any comparison. Thus, a project initiated by an investigator in, say, 1995, may not have its data analyzed until twenty-five years later. Hence, the scientists who designed the mission and who generated the hypotheses to be tested may not themselves see the answers to the questions spurred by their imagination and vision. Their children and grandchildren, or someone else's children and grandchildren, will answer the questions they asked. Space science is multigenerational.
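The arithmetic behind that generational span is simple enough to set out. The sketch below, added for illustration, uses the New Horizons dates quoted above and treats the dozen-year analysis period as a working assumption:

    # New Horizons timeline from the dates quoted in the text; the
    # twelve-year analysis period is the chapter's own rule of thumb.
    planning_start = 1995
    launch = 2006
    arrival = 2015
    analysis_complete = arrival + 12

    print(launch - planning_start)             # 11 years to design and build
    print(arrival - launch)                    # 9 years in flight
    print(analysis_complete - planning_start)  # 32 years, a full working generation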

Of course, scientists generally acknowledge that their accomplishments are based on the work of their predecessors. As Newton famously put it, "If I have seen a little further it is by standing on the shoulders of giants." But space-related science is different from previous science in that there is no choice but to leave one's own hypotheses to be tested and one's research to be completed by others who may not yet be born.

Furthermore, like the NAI discussed above, space programs are international or even supranational. Space exploration may be one of the most potent and purposeful means for nations to work together in peace. The race to take humans to the Moon was seen as a U.S.-Soviet contest. Yet, on July 17, 1975, while the Soviet Soyuz spacecraft and the last of the U.S. Apollo spacecraft were linked in space, the ports between the attached spacecraft were opened, Soviets and Americans shook hands, and then worked together for two days. Today, the Chinese, Indian, Russian, and European space agencies, and others are active in space; some of them have announced their intention to go to the Moon and establish a continuously occupied research station. This can best be done by international cooperation, and discussions to that end have started. The International Space Station (ISS) is a striking example of international cooperation. Russia has provided large portions of the craft, and many other nations are contributing partners to this incredible piece of engineering, the most complicated thing that humans have ever made in space. In the next few years the ISS will depend on the Russian Soyuz vehicle for human transport until the United States has completed the replacement for the now-decommissioned Space Shuttles.

It is not widely recognized how many engineering and other applications developed for space exploration are now used extensively on Earth. The space program is responsible for satellite communication and for the Global Positioning System (GPS) and all its imaginative applications. The programs and sensing devices used in intensive care units were adapted from the monitoring systems required for the Mercury program astronauts and others orbiting the Earth, and going to the Moon and the ISS. The software used in MRI and other medical imaging devices relies heavily on the technology developed by NASA for analyzing Landsat images (pictures of Earth taken from orbit). Microelectromechanical systems, now a multibillion-dollar industry, were in part developed for experimental studies on space-borne rat cardiac systems, and for providing a chromatograph for use on Mars. On Earth they are used in cameras and cellphones, to detect deceleration and deploy automobile airbags, to sense low air pressure to automatically inflate truck tires, and in many other systems.

Some recent applications from space engineering and research include improved lithium batteries, which are extensively used in space, and the improvement of the capabilities of terrestrial electric vehicles. The material designed for spacesuits helps to protect divers in hostile marine environments; space-age swimsuits decrease friction and drag and are now used widely by competitive swimmers. Experiments into regenerative ecosystems to be used on board space vehicles evolved into one of the most widespread NASA spin-offs of all time: a method for manufacturing an algae-based food supplement that provides the nutrients previously only available in breast milk. Algae are also now widely used as substitutes for fossil-fuel-based gasoline and other fuels. An advanced material developed by NASA is now being used on thin metal wires connected to implantable cardiac resynchronization therapy devices for patients experiencing heart failure. Cameras used on Mars landers and rovers to make panoramic photographs for navigation and observation have been adapted by commercial and amateur photographers.

New devices, engineering, digital software, materials, and ideas are needed to support the extraordinary demands of the space environment and to do things in it. Although not ordinarily required for terrestrial tasks, once they have been invented and used in space, terrestrial applications for such devices, products, and concepts are quickly found and often commercially exploited. Thus, the exploration of space has enhanced, and will continue to enhance, science and technology, the economy, and life on Earth.

Bibliography

Blumberg, Baruch S. "The NASA Astrobiology Institute: Early History and Organization." Astrobiology 3, no. 3 (2003): 463–70.
Davies, Paul. The Fifth Miracle: The Search for the Origin of Life. London: Penguin, 1998.
NASA website, http://www.nasa.gov/.


Afterword: From Michael Faraday to Steve Jobs
Freeman Dyson

When I was growing up as a child in England, the grown-ups told us that different countries had different talents. The Germans were best at music, the French were best at painting, the Americans were best at movies, and we were best at science. That was a fact of life that we took for granted. Our educational system did not push us into science. On the contrary, our schools had a curriculum heavily weighted toward classics, which meant intensive study of ancient Latin and Greek. The only science taught in my elementary school was physiography, which would now be called nature study, taught by a sweet old man who had grown up in the nineteenth century and was untouched by the twentieth. Physiography was the science that Darwin learned as a schoolboy, before he began to think about evolution. It was a mixture of geography and geology and ecology. It was a good way to get kids interested in science without pushing. We did not do tests in physiography. We enjoyed it because it was easier than Latin and Greek. The teacher had a sense of humor, and everyone called him the Blip.

Apart from physiography, we were left to discover science for ourselves. Bright kids were attracted to science because we knew we were good at it. We had some excellent role models. If Faraday and Darwin and Dirac could do it, so could we. We found out what science was about by reading popular books. There were plenty of good books written by real scientists such as astronomer Arthur Eddington, physicist James Jeans, and biologist John Haldane. We also read Herbert (H. G.) Wells, who wrote stories and novels but had been trained as a biologist and had dark visions of the future. Our approach to science had nothing to do with organization. We were free spirits pursuing a variety of dreams. I doubt whether the thought ever crossed our minds that we needed organization in order to be creative.

This book has a very different mind-set. It contains eleven lively and illuminating descriptions of people and events in the history of modern science. But the emphasis is on organizations. The main question that the authors are asking is, what were the organizations that made it possible for creative people to do extraordinary things? The eleven case histories do not give a clear answer to that question. The authors disagree about the answer, but they all agree that the question is meaningful. They take for granted the notion that organizations of some kind are better than none at all. I reject that assumption because I grew up in a different place and time. Faraday and Darwin and Dirac did not entirely escape organization, but they did very well without it. At least for some people at some times, organization is unnecessary and may be harmful. In a wide range of circumstances, organization and creativity may be incompatible.

There are four distinct kinds of scientific creativity. These exist because scientific advances can be classified either by their ends or by their means. Ends and means are independent variables. The ends of a scientific activity may be improved understanding or improved technology. Pure science aims at better understanding; applied science aims at better technology. Independently of the aims, the means of a scientific advance may be new ideas or new tools. New ideas emerge from pure science; new tools emerge from applied science. So the four kinds of creativity are the following.

Type 1: Idea-driven scientific revolutions, such as quantum mechanics and the theory of black holes.
Type 2: Tool-driven scientific revolutions, such as molecular biology and radio-astronomy.
Type 3: Idea-driven technology, such as digital computers and medical tomography.
Type 4: Tool-driven technology, such as integrated circuits and orbiting spacecraft.

These four lines of work are pursued by different kinds of people and require different kinds of organization. The continued growth of science requires all of them to interact. The output of types 1 and 2 is the input for types 1 and 3. The output of types 3 and 4 is the input for types 2 and 4. There is feedback in both directions, from ideas to tools and from tools to ideas. But history shows that the four types of creativity are best pursued separately. Environments that foster excellence in one line of work do not do so well with others. A healthy growth of science requires a diversity of environments.
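Dyson's input-output relations can be read as a small directed graph. The sketch below simply encodes the sentences above; the encoding, not the taxonomy, is an addition here:

    # Types 1 and 2 produce ideas, which feed types 1 and 3; types 3
    # and 4 produce tools, which feed types 2 and 4 (per the passage
    # above).
    FEEDS = {
        1: (1, 3),  # idea-driven pure science
        2: (1, 3),  # tool-driven pure science
        3: (2, 4),  # idea-driven technology
        4: (2, 4),  # tool-driven technology
    }

    def downstream(source):
        """Types that take the output of `source` as their input."""
        return FEEDS[source]

    # The two-way feedback described above: ideas feed tools (1 -> 3)
    # and tools feed ideas (3 -> 2).
    assert 3 in downstream(1)
    assert 2 in downstream(3)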

freeman dyson

about a conversation that he overheard between two small boys walking outside the open window of his office. “What is that?” said one small boy. “That’s the Institute,” said the other. “Is it a church?” said the first. “No, it’s a place to eat,” said the second. The institute provides its members with a place to sleep, a place to work, a place to eat, and day care for the kids. Any organization going further than that is likely to be counterproductive. Three chapters are concerned with type 2 creativity, pure science driven by new tools. Philip Anderson describes Bell Laboratories in their days of glory, Andrew Robinson describes the discovery of the double-­helix structure of DNA, and Baruch Blumberg describes the exploration of the universe by NASA space-­science missions. Since new tools are expensive, these activities were inevitably entangled with financial and administrative constraints. Some budgetary organization was unavoidable. The essential requirement for maintaining creativity was to make the organization invisible to the scientists. The art of the administrator was to protect the scientists from the organization. The DNA story provides the most illuminating account of how this could be done well or badly. There were two institutions involved, the Cavendish Laboratory at Cambridge with Lawrence Bragg in charge, and the Medical Research Council laboratory at Kings College in London with John Randall in charge. Maurice Wilkins and Rosalind Franklin were experimenters working at Kings. Francis Crick and James Watson were theorists working at Cambridge. Wilkins and Franklin used the new tool of X-­ray crystallography to make pictures of the scattering of X-­rays by DNA. Crick and Watson understood the pictures and deduced the double-­helix structure. Wilkins and Franklin could probably have deduced the structure themselves but were scooped by Crick and Watson. How did this happen? It happened because Randall was a bad administrator and Bragg was a good one. At Kings, the organization was formal, relations between Wilkins and Franklin were unfriendly, and communication was difficult. At the Cavendish, the organization was informal, Crick and Watson were enjoying them244

afterword

selves uproariously, and communication was no problem. Wilkins and Franklin had the pictures first, and they were scooped because they were organized to continue taking pictures before they tried to understand them. They thought they would have plenty of time to look at the pictures later. They did not realize that the Cavendish people were playing the game with different rules. Bell Laboratories were like the Cavendish Laboratory, with scientists protected from organization while they invented transistors and lasers and did pioneering experiments in condensed-­matter physics. The same protection existed to a more limited extent in the NASA space-­science program. Type 3 creativity, the use of new ideas to develop new technology, appears in two chapters of the book, the account by George Dyson of the computer project at the Institute for Advanced Study, and the account by Timothy Bresnahan of the rise of the personal computer industry in Silicon Valley. In the institute computer project, the new idea was software but the output of the project was hardware. The purpose of the project was to translate the abstract logic of any computation into the language of software, and then build a machine out of hardware that could perform the computation specified by the software. The project, like other engineering projects, ran into many delays and difficulties. To overcome a succession of crises, it required careful organization. John von Neumann, the leader of the project, understood that the central problem to be solved was to manufacture a reliable machine with a large number of unreliable components. An administrator must solve this same problem when building a reliable workforce out of a large number of unreliable humans. Von Neumann solved the mechanical and human problems in the same way. In the machine and in the human workforce, he kept the organization as loose and flexible as possible. Von Neumann’s revolutionary idea was the union of hardware and software to create machines that could reliably perform complicated tasks. He also understood that there is a human analog to the union of hardware and software, and he made full use of the human analog. The 245

freeman dyson

human analog is husbands and wives. The division of labor between husbands and wives has always been a contentious problem. From a strictly biological point of view, the male sperm is mostly software, carrying genetic instructions, while the female egg is mostly hardware, carrying the chemical tools for making a baby. From a social point of view the roles are reversed. When von Neumann started his project in 1946, husbands mostly came to work on hardware and wives on software. He welcomed wives into his organization. His own wife, Klari, had a responsible position as one of the first computer programmers. Heading the workforce was a remarkable husband-­and-­wife team, Julian and Mary Bigelow. Julian was the chief engineer and Mary was a clinical psychologist. The employees in the institute housing community used to say, if you were in trouble with your car, Julian could fix it, and if you were in trouble with your soul, Mary could fix it. Julian and Mary together kept the human organization from falling apart. Timothy Bresnahan’s chapter, “Entrepreneurial Creativity,” tells a different story, concerned with marketing rather than with technology. The rise of the personal computer industry was a triumph of marketing. Von Neumann was a great success as a technologist but a total failure as a marketer. He envisaged his computer as a technical tool for experts doing scientific calculations. He did not intend to turn his computer project into a business. The experts who used his computer were meteorologists trying to predict the weather and nuclear weaponeers trying to design hydrogen bombs. He never imagined the computer as a desktop or a laptop or a toy that would occupy the waking hours of men, women, and children all over the world. Silicon Valley was the place where computers changed from being large and formidable to being small and user-­friendly. To create the new mass-­market economy of Silicon Valley, a new kind of organization, the start-­up company, emerged. The start-­up company was typically begun by a small group of technically imaginative scientists and venture capitalists. If it was successful in opening a new market, the company would rapidly grow into a big operation with thousands of employees. In the start-­up phase 246

afterword

the organization was minimal, and in the growth phase the organization was kept as informal as possible. Google is the classic example of a start-­up company that grew into a monster. The organization of the monster is still almost invisible to most of the employees. Type 4 creativity, the development of useful technology driven by new tools, is the main subject of the book. It fills five chapters: Rogers Hollingsworth and David Gear on “The Rise and Decline of Hegemonic Systems of Scientific Creativity,” the two David Billingtons on “The Sources of Modern Engineering Innovation,” Susan Hackwood on “Technically Creative Environments,” Tony and Jonathan Hey on “Creative Activity as Technology Turns into Applications,” and Joshua Silver on “The Creation of Self-­Adjustable Eyeglasses.” As the titles of these chapters indicate, the emphasis is on organization rather than ideas. The authors start from the premise that an organization is required for type 4 creativity. They are concerned with assessing the existing organizations and inquiring which works best. Absent from the book is any discussion of the ideas of Peter Diamandis, who promotes and practices a different strategy for organizing creativity. Diamandis is chairman of the X-­Prize Foundation. He promotes the idea that large cash prizes are far more cost-­effective than either commercial investment or government subsidy for motivating technical innovations. He observes that such prizes have been outstandingly successful in the past, typically attracting more than ten times the amount of the prize in money spent by competing teams. Only the winning team is paid, but the losers are also helping to develop the new technology. The classic example of such a prize was offered by Raymond Orteig for the first nonstop flight from New York to Paris. It was won by Charles Lindbergh in 1927, and stimulated an investment in long-­range aviation far greater than the twenty-­five thousand dollars paid to Lindbergh. Diamandis’s X-­Prize Foundation is administering similar prizes today, mostly for private ventures in space. The Defense Advanced Research Projects Agency (DARPA) recently gave prizes for driverless vehicles racing against each other over a prescribed off-­road 247

freeman dyson

course. Diamandis believes that such prizes could play a greater role in the future as the driving force of innovation. One of the virtues of this strategy is that it requires almost no permanent organization. All that the sponsor of the prize has to do is to choose the objective and specify clearly the conditions that the winner has to satisfy. The competitor’s job is then to create the necessary organization, and the organization can disappear as soon as the competition is over. I wish that Diamandis—­who participated in the 2008 symposium at the Institute for Advanced Study—­had been persuaded to write a chapter for this book. An expanded version of his paper, “Using Incentive Prizes to Drive Creativity, Innovation and Breakthroughs,” originally written for the X-­Prize Foundation, would have been a breath of fresh air, blowing in a direction sharply different from the other chapters. He would have given a better balance to the book. He shows us another way to stimulate type 4 creativity and avoid stifling it with organization. What lessons does this book teach us for the future? The future of scientific ideas is unpredictable. The future of tools and technologies may be partially predictable. I venture to make a prediction about an example of type 3 creativity that might happen during the present century, a prediction of a technological revolution that might grow out of a scientific discovery. I am guessing that some time in the present century, the science of neurology will make massive progress, so that we will understand in detail how the human brain works. I am guessing that the brain will turn out to be an analog computer rather than a digital computer. Analog means working with continuous variables; digital means working with zeros and ones. Historically, the analog machine was a slide-­rule and the digital machine was an abacus. I consider it likely that the brain is analog, because the two tasks that the brain performs spectacularly well are the rapid recall by associative memory of visual images in space and of auditory patterns in time. The comparison of pictures and of speech patterns is done so quickly and effortlessly that we can only guess how the process is carried out by our neurons. My guess is that the process is analog, because it seems to 248

afterword

grasp the overall shape of a pattern rather than using a point-­by-­point calculation. Supposing that my guesses of scientific discovery come true, then the way will be open for a radical revolution in the technology of computing. The existing technology is entirely digital. The old dream of artificial intelligence, promoted for fifty years by computer experts and science-­fiction writers, was the construction of a computer that could think like a human brain. Perhaps the dream failed because the computers were digital and the brain is analog. Perhaps the dream will come true when we rebuild computer technology using analog machines. In that case, the creative moment will come when we apply a detailed understanding of the brain to the design of radically new computer hardware and software. Later in the century, the new analog computer industry will have profound effects on human society, as creative and disruptive as the effects of digital technology today. What should we do to improve the chances that our society will remain creative in future centuries? It is no longer true, as it was when I was young, that the United States is creative only in making movies. In the last fifty years, the United States has become a leading center of creativity in pure science and also in technology. But our educators think we ought to be doing better. Great efforts are being made to push our schoolchildren into creativity by loading them with a heavy curriculum of science and mathematics. I believe these efforts are pushing in the wrong direction. The long hours in the classroom are likely to turn off creative spirits rather than to turn them on. Creative role models for these kids already exist. Their names are not Faraday and Darwin and Dirac but Gates and Jobs and Zuckerberg. This is the land where creativity means dropping out of college and founding a start-­up company that changes the world.


Contributors

Andrew Robinson (editor), a King's Scholar of Eton College and a visiting fellow of Wolfson College, Cambridge, from 2006 to 2010, was literary editor of The Times Higher Education Supplement in London from 1994 to 2006. He is the author of some twenty-five books in the arts and sciences, published by trade and academic publishers and translated into ten languages. They include The Man Who Deciphered Linear B: The Story of Michael Ventris (Thames & Hudson, 2002); Sudden Genius? The Gradual Path to Creative Breakthroughs, a study of exceptional creativity in the arts and sciences (Oxford University Press, 2010); and Genius: A Very Short Introduction (Oxford University Press, 2011); for Sudden Genius? he received a research grant from the John Templeton Foundation. His latest books are Cracking the Egyptian Code: The Revolutionary Life of Jean-François Champollion (Thames & Hudson / Oxford University Press, 2012), and an edited collection, The Scientists: An Epic of Discovery (Thames & Hudson, 2012), with contributions from scientists, historians of science, and science writers. He reviews books for The Lancet, Nature, New Scientist, and Science.

Philip W. Anderson is Joseph Henry Professor of Physics Emeritus at Princeton University. He shared the 1977 Nobel Prize in physics for research undertaken at Bell Laboratories. His latest book is the collection More and Different: Notes from a Thoughtful Curmudgeon (World Scientific, 2011). In 2006 he was calculated to have the world's highest "creativity index" among physicists, based on professional citation of his publications.

David P. Billington is Gordon Y. S. Wu Professor of Engineering Emeritus at Princeton University. His many books include Power, Speed and Form: Engineers and the Making of the Twentieth Century (Princeton University Press, 2006), written with David P. Billington Jr.

David P. Billington Jr. is an independent scholar who writes on the history of modern engineering innovation.

Baruch S. Blumberg, who died in 2011, shared the 1976 Nobel Prize in physiology or medicine for the discovery of the Hepatitis B virus and the invention of the vaccine that prevents Hepatitis B infection. A Distinguished Scientist at the Fox Chase Cancer Center in Philadelphia, and University Professor of Medicine and Anthropology at the University of Pennsylvania, he was the founding director of the NASA Astrobiology Institute at the Ames Research Center in Moffett Field, California. Blumberg was the author of some 450 scientific papers; his books include Hepatitis B: The Hunt for a Killer Virus (Princeton University Press, 2002).

Timothy F. Bresnahan is Landau Professor in Technology and the Economy at Stanford University. His books include Building High-Tech Clusters: Silicon Valley and Beyond (Cambridge University Press, 2004), edited with Alfonso Gambardella.

Freeman Dyson is professor of physics emeritus at the Institute for Advanced Study in Princeton, and is well known for his contribution to quantum electrodynamics. In 2000 he was awarded the Templeton Prize. A celebrated writer on science for journals, magazines, and newspapers, he has written many books, including Disturbing the Universe (Harper & Row, 1979), Infinite in All Directions (HarperCollins, 1988), Imagined Worlds (Harvard University Press, 1997), and The Scientist as Rebel (New York Review of Books, 2006).

George Dyson is a historian of technology whose publications include Baidarka (1986) on the development (and redevelopment) of the Aleut kayak, Darwin among the Machines (Penguin, 1997) on the evolution of digital computing and telecommunications, and Project Orion (Henry Holt, 2002) on a path not taken into space. His latest book is Turing's Cathedral: The Origins of the Digital Universe (Pantheon, 2012).

David M. Gear is a research specialist in history and the social sciences at the University of Wisconsin, Madison. He conducts research on major discoveries, scientists, research organizations, and creativity in science.

Susan Hackwood was a department head at Bell Laboratories before moving to California, where she is a professor of electrical engineering at the University of California, Riverside. She is also executive director of the California Council on Science and Technology, a nonprofit corporation sponsored by California's major academic institutions.

Jonathan Hey researched the traits of successful creative teams and interdisciplinary research for a PhD dissertation at the University of California, Berkeley. He has worked on both the people and technical sides of innovation in the United States and Europe and now works as a user experience designer in London.

Tony (A. J. G.) Hey received his DPhil in theoretical physics from Oxford University in 1970 before pursuing research as a postdoctoral fellow with the Nobel laureates Murray Gell-Mann and Richard Feynman at the California Institute of Technology. Formerly professor of computation at Southampton University, he is now a vice president at Microsoft Research, where he is responsible for building partnerships with the academic community. His latest book is The Computing Universe: How Computer Science Is Changing the World (Cambridge University Press, 2013), written with Juri Papay.

J. Rogers Hollingsworth is professor of history and sociology emeritus at the University of Wisconsin, Madison, and a senior scholar at the Kauffman Foundation. Since 2002 he has been a visiting scholar at the University of California, San Diego, in an institute of the physics department. He has published extensively on the relationship between innovation and institutions in the United States and Europe. His most recent book (with Ellen Jane Hollingsworth) is Major Discoveries, Creativity, and the Dynamics of Science (Edition echoraum, 2011).

Gino Segrè is professor of physics and astronomy emeritus at the University of Pennsylvania. He is the author of A Matter of Degrees: What Temperature Reveals about the Past and Future of Our Species, Planet and Universe (Viking, 2002); Faust in Copenhagen: A Struggle for the Soul of Physics (Viking, 2007), an account of a famous conference of physicists at Niels Bohr's Institute for Theoretical Physics in 1932; and Ordinary Geniuses: Max Delbrück, George Gamow, and the Origins of Genomics and Big Bang Cosmology (Viking, 2011). He is a nephew of the physics Nobel Prize–winner Emilio Segrè, who worked with Enrico Fermi.

Joshua Silver is an experimental atomic physicist from Oxford University, where his positions have included professor of physics and fellow and tutor in physics at New College, Oxford. In the early 1980s he became interested in adaptive optics and vision. He invented a new technology that has the potential to bring inexpensive eyeglasses to billions of people in the developing world who currently have little or no access to eye-care professionals.


Index

Adspecs. See self-adjustable eyeglasses
Afenyo, George, 218, 220
Altair computer, 180–81
alternating current, 127
Alvarez, Luis, 4–5, 216
Alvarez-Lohmann lens, 216
America
  big science in, 40–41
  brain gain of, 148–49
  future scenarios in, 46–48
  H1B visas and, 150
  hegemony of, 9–10, 25–26, 27, 39–42
  higher education in, 86–87
  industrial revolution in, 124, 166
  in post-W.W. II era, 9–10, 25–26, 27, 39–42
  scientific dominance of, 9–10
  technically creative individuals in, 149–50
Anderson, Philip W., 7–8, 15, 244
assembly line, 127–28
astrobiology, 227, 228, 231, 234–35
atom, 55, 59–60
authoritarianism, 31–32
automobile. See also Model T
  basic science and, 127
  engineering and, 128
  Ford and, 127–28
Aydelotte, Frank, 91
Bacon, Francis, 37
Bamberger, Louis, 86, 87
Bardeen, John, 8, 23n8, 76, 132–33
basic science
  automobile and, 127
  Bush and, 123–24
  transistor and, 132–33
Bell, Alexander Graham, 5
Bell Laboratories (Bell Labs)
  biophysics department at, 79
  creative environment rules at, 158
  creative research managers at, 152
  entrepreneurial skills at, 18
  exploitation of discoveries at, 80
  hiring practices at, 71, 73
  indoctrination at, 74
  inventions of, 71, 79–80
  Loyalty Oath at, 76, 78
  management at, 8, 71–72, 75–76, 78, 80
  mathematical advances at, 71
  nuclear reactor design at, 72
  radar and, 72–73
  research justification at, 74
  sabbaticals at, 77, 78
  semiconductor program at, 72, 73, 76, 77
  solid-state physics at, 72–73, 74, 75–76
  spectroscopy at, 73
  success factors of, 7–8
  theorists at, 78
  travel and, 77
  "young turks" at, 72
  "younger turks" at, 77
Bennett, Emmett, Jr., 109, 110, 111
Berners-Lee, Tim, 196
big science
  American emergence of, 40–41
  in small organizations, 47–48
Billington, David P., 16, 247
Billington, David P., Jr., 16, 247
Blumberg, Baruch S., v, 6, 9, 21, 244
Bohr, Niels, 32
  atomic model of, 55
  Copenhagen Institute for Theoretical Physics and, 9, 10, 56
  Copenhagen Spirit created by, 56–57
  fundraising and, 66–68
  on mentoring, 57–58
  professorship of, 55–56
  students mentored by, 58
  as theorist, 14
Bohr Institute. See Copenhagen Institute for Theoretical Physics
Bose, S. N., 10
bottom-of-the-learning-curve-pricing
  alternative to, 176
  as entrepreneurial innovation, 176–77, 178
  invention and, 176–78, 180
  product pricing and, 175–77
  sales volume and, 176–77
brain gain
  America and, 148–49
  as brain mobility, 147–48
  historical movement of, 147–48
  U. S. immigration visas and, 148–49
Bragg, Lawrence, 37, 38, 47, 115, 244
Brattain, Walter, 23n8, 132–33
breakthroughs
  American industrial revolution and, 124, 166
  creativity and, 206–7
  examples of, 192–93
  factors in, 42, 43
  individual's requirements for, 192, 206
Bresnahan, Timothy, 18, 245, 246
Britain
  hegemony of, 9, 11–12, 25–26, 27, 35–39
  Nobel prizes in, 39
  science's post-W.W. II decline in, 39
  scientific dominance of, 9, 11, 39
  university system's rise in, 35, 37
Bundy, Don, 222
Burton, William, 128
Bush, Vannevar, 123–24
Cambridge University. See also Cavendish Laboratory
  Cavendish Laboratory's dominance at, 35–39
  scientific research dominance of, 35, 37
Carlsberg Foundation, 67
Carlson, W. Bernard, 205
Carnegie Foundation, 85
cash prizes, 247–48
Cavendish Laboratory (Cambridge University)
  DNA and, 9
  DNA decoding at, 16, 113
  Nobel Prize winners from, 38
  scientific dominance of, 35–39
  working relationships at, 115
Cayley, George, 129
centralization. See government centralization
Centre for Vision in the Developing World, 20, 212
CERN. See European Center for Nuclear Research
Chadwick, James, 14
Chadwick, John, 107, 109, 110, 114
Chandrasekhar, Subrahmanyan, 13
Chapman, Allan, 11
Child Self-Refraction Study, 222–23
China
  culture of, 13–14
  science doctoral programs in, 46
  science's rise in, 46
  scientific dominance in, 9–10
Clarendon Laboratory (Oxford University), 20
Colbert, Jean-Baptiste, 28
Cold Spring Harbor Laboratory, 69
Collège de France, 28, 29
combustion engine, 127, 128
commercialization, 42, 47, 48
communication, 41, 45–46
companies, 19. See also spin-offs; start-up companies
Copenhagen Institute for Theoretical Physics (Niels Bohr Institute)
  annual meetings at, 57
  Bohr and, 9, 10, 56
  Copenhagen Spirit at, 56–57
  funding of, 66–68
  physicists at, 58–59
  physics at, 14
Corbino, Orso, 61
creative environments
  Bell Labs' rules and, 158
  description of, 151
  ideal leadership for, 155–56
  leader's role in, 152
  preferred leadership types for, 154–55
  quality of life for, 157
  resource competition and, 157
  resources and, 156–57
  rules for, 158
creativity, 3. See also entrepreneurial creativity; exceptional creativity; technical creativity
  autonomy and, 156
  breakthroughs and, 206–7
  diffusion and types of, 169
  enhancing techniques of, 147
  environmental requirements for, 147, 151
  formal education and, 102–3
  four elements of, 150–51
  genetic component of, 146–47
  operational definition of, 145
  problem-solving and, 199–200
  resources and, 156–57
Crick, Francis, 16, 107, 108, 109, 110–18, 244–45
crowdsourcing, 194–96
Csikszentmihalyi, Mihaly, 102
cultures, 14. See also Chinese culture
Cusco, Dr., 216
customer
  companies and, 19
  innovation and, 200
  Microsoft and, 200–201
  Nintendo and, 203
  Segway and, 201–2, 208n15
Dahlem, Germany, 32
DARPA. See Defense Advanced Research Projects Agency
Darwin, Charles, 16, 37, 241, 242, 243, 249
Davies, Paul, 228
Defense Advanced Research Projects Agency (DARPA), 247–48
deoxyribonucleic acid (DNA), 106
  Cavendish Laboratory and, 9
  decoding of, 16, 113
  Dyson's observations on story of, 244–45
  Franklin's contributions to decoding of, 111–12
  Pauling and, 112, 114–15
  Wilkins' DNA X-ray images of, 111
Diamandis, Peter, 247–48
diffusion
  as complementary with invention and innovation, 170
  creativity types and, 169
  market adoption of products and processes, 167
digital photography, 202
Dirac, Paul, 241, 242, 243, 249
DNA. See deoxyribonucleic acid
doctoral programs
  China and, 46
  Einstein and, 103–4
  exceptional creativity correlation to, 105–6
Dow Corning Corporation, 20
"dynamoptometre," 216
Dyson, Freeman, 6
Dyson, George, 15, 245
economy, 10, 11, 12, 18, 130–31, 164, 167
Edison, Thomas, 3, 17
  alternating current and, 127
  electric light bulb of, 125–27
  as electric power grid's innovator, 124–26
  engineering and, 125
  entrepreneurship of, 205–6
  as exceptional creativity's epitome, 204–5
  flawed arguments against, 137n5
  formula manual of, 137n4
  as high-resistance lamp's inventor, 126
  innovation and, 205–6
  phonograph and, 205–6
  power grid fundraising and, 125, 126
education. See also formal education; higher education
  America's "silo" structure in, 41
  artists and, 102–3
  China's science doctoral programs and, 46
  engineering and, 136
  exceptional creativity's optimum level of, 105
  France's government supervision of, 30
  German system of, 31–32
  study of innovators in, 136
Einstein, Albert, 16, 19, 32, 54
  as Bose's collaborator, 10
  as IAS's most prominent resident, 83, 89
  leak-proof refrigerator patent of, 193
  PhD and, 103–4
  Planck and, 192
  relativity theory and, 3, 103–4
  as Swiss Patent Office employee, 103
  University of Zurich and, 103–4
electric light bulb, 125–27
electric power grid
  alternating current and, 127
  Edison's innovation of, 124–26
  funding of, 124, 125
electromagnetic radiation, 54
electronic computer. See also personal computers
  microchip and, 130–31
  microprocessor and, 134, 172, 179
  as new economy's growth engine, 130–31
Electronic Computer Project. See also electronic computer
  applications of, 95–96
  funding of, 94
  IAS and, 15, 92–96, 245
electronics
  explosive growth of, 130–31
  microchip and, 130–31
  microprocessor and, 134, 172, 179
  semiconductors and, 131–32
  transistor and, 132–33
  triode and, 131–33
engineering
  automobile and, 128
  combustion engine and, 127, 128
  core activity of, 135
  crude oil refining and, 128
  education for, 136
  flight and, 129
  microchip and, 135
  science and, 124, 125, 127–28, 134, 135
  science's differentiation from, 135
  transistor and, 132–33
entrepreneurial creativity
  bottom-of-the-learning-curve-pricing and, 176, 177, 178
  characteristics of, 163, 164–65
  economic growth and, 18
  engineering creativity and, 164
  scientific creativity and, 164
  software and, 183
  spin-offs and, 182
  technical feasibility and, 163
  value creation for society and, 163
  volume discounts and, 177
entrepreneurial implementation, 165
entrepreneurial skills, 18, 174

Bell Laboratories (Bell Labs) biophysics department at, 79 creative environment rules at, 158 creative research managers at, 152 entrepreneurial skills at, 18 exploitation of discoveries at, 80 hiring practices at, 71, 73 indoctrination at, 74 inventions of, 71, 79–80 Loyalty Oath at, 76, 78 management at, 8, 71–72, 75–76, 78, 80 mathematical advances at, 71 nuclear reactor design at, 72 radar and, 72–73 research justification at, 74 sabbaticals at, 77, 78 semiconductor program at, 72, 73, 76, 77 solid-state physics at, 72–73, 74, 75–76 spectroscopy at, 73 success factors of, 7–8 theorists at, 78 travel and, 77 “young turks” at, 72 “younger turks” at, 77 Bennett, Emmett, Jr., 109, 110, 111 Berners-Lee, Tim, 196 big science American emergence of, 40–41 in small organizations, 47–48 Billington, David P., 16, 247 Billington, David. P., Jr., 16, 247 Blegen, Carl, 107–8 Blumberg, Baruch S., v, 6, 9, 21, 244 Bohr, Niels, 32 atomic model of, 55 Copenhagen Institute for Theoretical Physics and, 9, 10, 56

Bacon, Francis, 37 Bamberger, Louis, 86, 87 Bardeen, John, 8, 23n8, 76, 132–33 basic science automobile and, 127 Bush and, 123–24 transistor and, 132–33 Bell, Alexander Graham, 5

255

index Copenhagen Spirit created by, 56–57 fundraising and, 66–68 on mentoring, 57–58 professorship of, 55–56 students mentored by, 58 as theorist, 14 Bohr Institute. See Copenhagen Institute for Theoretical Physics Bose, S. N., 10 bottom-of-the-learning-curve-pricing alternative to, 176 as entrepreneurial innovation, 176–77, 178 invention and, 176–78, 180 product pricing and, 175–77 sales volume and, 176–77 brain gain America and, 148–49 as brain mobility, 147–48 historical movement of, 147–48 U. S. immigration visas and, 148–49 Bragg, Lawrence, 37, 38, 47, 115, 244 Brattain, Walter, 23n8, 132–33 breakthroughs American industrial revolution and, 124, 166 creativity and, 206–7 examples of, 192–93 factors in, 42, 43 individual’s requirements for, 192, 206 Bresnahan, Timothy, 18, 245, 246 Britain hegemony of, 9, 11–12, 25–26, 27, 35–39 Nobel prizes in, 39 science’s post-W.W. II decline in, 39 scientific dominance of, 9, 11, 39 university system’s rise in, 35, 37 Bundy, Don, 222 Burton, William, 128 Bush, Vannevar, 123–24

Cambridge University. See also Cavendish Laboratory
  Cavendish Laboratory's dominance at, 35–39
  scientific research dominance of, 35, 37
Carlsberg Foundation, 67
Carlson, W. Bernard, 205
Carnegie Foundation, 85
cash prizes, 247–48
Cavendish Laboratory (Cambridge University)
  DNA and, 9
  DNA decoding at, 16, 113
  Nobel Prize winners from, 38
  scientific dominance of, 35–39
  working relationships at, 115
Cayley, George, 129
centralization. See government centralization
Centre for Vision in the Developing World, 20, 212
CERN. See European Center for Nuclear Research
Chadwick, James, 14
Chadwick, John, 107, 109, 110, 114
Chandrasekhar, Subrahmanyan, 13
Chapman, Allan, 11
Child Self-Refraction Study, 222–23
China
  culture of, 13–14
  science doctoral programs in, 46
  science's rise in, 46
  scientific dominance in, 9–10
Clarendon Laboratory (Oxford University), 20
Colbert, Jean-Baptiste, 28
Cold Spring Harbor Laboratory, 69
Collège de France, 28, 29
combustion engine, 127, 128
commercialization, 42, 47, 48
communication, 41, 45–46
companies, 19. See also spin-offs; start-up companies
Copenhagen Institute for Theoretical Physics (Niels Bohr Institute)
  annual meetings at, 57
  Bohr and, 9, 10, 56
  Copenhagen Spirit at, 56–57
  funding of, 66–68
  physicists at, 58–59
  physics at, 14
Corbino, Orso, 61
creative environments
  Bell Labs' rules and, 158
  description of, 151
  ideal leadership for, 155–56
  leader's role in, 152
  preferred leadership types for, 154–55
  quality of life for, 157
  resource competition and, 157
  resources and, 156–57
  rules for, 158
creativity, 3. See also entrepreneurial creativity; exceptional creativity; technical creativity
  autonomy and, 156
  breakthroughs and, 206–7
  diffusion and types of, 169
  enhancing techniques of, 147
  environmental requirements for, 147, 151
  formal education and, 102–3
  four elements of, 150–51
  genetic component of, 146–47
  operational definition of, 145
  problem-solving and, 199–200
  resources and, 156–57
Crick, Francis, 16, 107, 108, 109, 110–18, 244–45
crowdsourcing, 194–96
Csikszentmihalyi, Mihaly, 102
cultures, 14. See also China, culture of
Cusco, Dr., 216
customer
  companies and, 19
  innovation and, 200
  Microsoft and, 200–201
  Nintendo and, 203
  Segway and, 201–2, 208n15
Dahlem, Germany, 32
DARPA. See Defense Advanced Research Projects Agency
Darwin, Charles, 16, 37, 241, 242, 243, 249
Davies, Paul, 228
Defense Advanced Research Projects Agency (DARPA), 247–48
deoxyribonucleic acid (DNA), 106
  Cavendish Laboratory and, 9
  decoding of, 16, 113
  Dyson's observations on story of, 244–45
  Franklin's contributions to decoding of, 111–12
  Pauling and, 112, 114–15
  Wilkins' DNA X-ray images of, 111
Diamandis, Peter, 247–48
diffusion
  as complementary with invention and innovation, 170
  creativity types and, 169
  market adoption of products and processes, 167
digital photography, 202
Dirac, Paul, 241, 242, 243, 249
DNA. See deoxyribonucleic acid
doctoral programs
  China and, 46
  Einstein and, 103–4
  exceptional creativity correlation to, 105–6
Dow Corning Corporation, 20
"dynamoptometre," 216
Dyson, Freeman, 6
Dyson, George, 15, 245
economy, 10, 11, 12, 18, 130–31, 164, 167
Edison, Thomas, 3, 17
  alternating current and, 127
  electric light bulb of, 125–27
  as electric power grid's innovator, 124–26
  engineering and, 125
  entrepreneurship of, 205–6
  as exceptional creativity's epitome, 204–5
  flawed arguments against, 137n5
  formula manual of, 137n4
  as high-resistance lamp's inventor, 126
  innovation and, 205–6
  phonograph and, 205–6
  power grid fundraising and, 125, 126
education. See also formal education; higher education
  America's "silo" structure in, 41
  artists and, 102–3
  China's science doctoral programs and, 46
  engineering and, 136
  exceptional creativity's optimum level of, 105
  France's government supervision of, 30
  German system of, 31–32
  study of innovators in, 136
Einstein, Albert, 16, 19, 32, 54
  as Bose's collaborator, 10
  as IAS's most prominent resident, 83, 89
  leak-proof refrigerator patent of, 193
  PhD and, 103–4
  Planck and, 192
  relativity theory and, 3, 103–4
  as Swiss Patent Office employee, 103
  University of Zurich and, 103–4
electric light bulb, 125–27
electric power grid
  alternating current and, 127
  Edison's innovation of, 124–26
  funding of, 124, 125
electromagnetic radiation, 54
electronic computer. See also personal computers
  microchip and, 130–31
  microprocessor and, 134, 172, 179
  as new economy's growth engine, 130–31
Electronic Computer Project. See also electronic computer
  applications of, 95–96
  funding of, 94
  IAS and, 15, 92–96, 245
electronics
  explosive growth of, 130–31
  microchip and, 130–31
  microprocessor and, 134, 172, 179
  semiconductors and, 131–32
  transistor and, 132–33
  triode and, 131, 133
electrons, 55
engineering
  automobile and, 128
  combustion engine and, 127, 128
  core activity of, 135
  crude oil refining and, 128
  education for, 136
  flight and, 129
  microchip and, 135
  science and, 124, 125, 127–28, 134, 135
  science's differentiation from, 135
  transistor and, 132–33
entrepreneurial creativity
  bottom-of-the-learning-curve-pricing and, 176, 177, 178
  characteristics of, 163, 164–65
  economic growth and, 18
  engineering creativity and, 164
  scientific creativity and, 164
  software and, 183
  spin-offs and, 182
  technical feasibility and, 163
  value creation for society and, 163
  volume discounts and, 177
entrepreneurial implementation, 165
entrepreneurial skills, 18, 174
entrepreneurs
  bottom-of-the-learning-curve-pricing and, 176, 177, 178
  skills required for, 174
  volume discounts and, 177
entrepreneurship
  academics and, 196
  Edison and, 205–6
  Microsoft and, 197–99
  qualities of, 196–97
  wireless sensor technology and, 197
European Center for Nuclear Research (CERN), 15, 69
Evans, Arthur, 107, 108, 110, 112, 113, 115
exceptional creativity, 3, 206
  America's future and, 46–48
  artists and, 102
  Britain and, 11–12
  cash prizes as incentive for, 247–48
  Edison as epitome of, 204
  Einstein as example of, 192
  Electronic Computer Project and, 94–96, 245
  five aspects of, 53–54
  formal education and, 16, 102–3
  French government's stifling of, 29–30
  Germany's authoritarian constraints on, 31–32
  higher education's impact on, 103
  India and, 13
  individuals and, 13
  institutional support and, 16, 117
  IQ correlation with, 4–5
  optimum education level for, 105
  PhD correlation and, 105–6
  science education and, 106
Eysenck, H. J., 118
Faraday, Michael, 241, 242, 243, 249
Fermi, Enrico
  as experimentalist, 14, 15, 60
  fundraising and, 68–69
  as great mentor, 63
  neutrino and, 63
  nuclear reactors and, 72
  as practical problem solver, 61–62, 64
  as Rome University's physics chair, 61
  student remembrances of, 64
  University of Chicago and, 64
  weak interactions, discovery of, 63
  as world's leader in atomic energy, 66
Fermi National Accelerator Laboratory (Fermilab), 65
Feynman, Richard, 5
Flexner, Abraham, 15, 85, 86–87, 88–89
Flexner, Simon, 84, 85
flight, 129. See also Wright, Orville; Wright, Wilbur
Ford, Henry, 4, 127–28
formal education
  artists and, 102
  creativity and, 102–3
  Csikszentmihalyi and, 102
  Dyson on, 249
  exceptional creativity and, 16, 118
  geniuses and, 101
  optimum creative level of, 105
  scientists and, 102
  scientists with, 99
  Twain on, 101
  Young and, 101
France
  distinguished scientists in, 27
  government centralization's impact on, 28
  hegemony of, 9, 25–26, 27–30
  under-investment in science training in, 29
Franklin, Rosalind, 109, 110, 111–12, 244–45
French Revolution, impact of, 28
French university system, 30
Fuld, Carrie, 86
fundamental science
  importance of, 42, 49
  innovation's time horizon in, 47
  physics' discoveries and, 54, 59–60
fundraising
  Bohr and, 66–68
  Edison and, 125, 126
  Fermi and, 68–69
Galileo mission
  ammonia clouds and, 236
  asteroid belt and, 236
  crash landing of, 237
  Jupiter's moons and, 236–37
  as NASA's deep space mission, 235–37
Gates, Bill, 249
Gear, David M., 9, 247
general purpose technology (GPT)
  complementarity of component parts, 171–72
  component parts' interdependence, 171
  integrated circuit as, 178, 181
geothermal sites, 231–32
Germany
  authoritarian constraints in, 31–32
  distinguished scientists from, 34
  hegemony of, 9, 25–26, 27, 31–35
  Kaiser Wilhelm Institutes in, 31, 32–33
  pre-Nazi decline of, 33
  quantum theory in, 35
  science education in, 31–32
  scientific dominance of, 9
Gertner, Jon, 7
Ghana, 217–18, 220
Global Positioning System (GPS), 192, 239
Global Vision 2020, 219
globalization, science and, 47
Goddard, Robert, 130
Goldin, Daniel, 232
Google, 247
GPS. See Global Positioning System
GPT. See general purpose technology
Grove, Andrew, 173
H1B visas, 149–50, 159n13
Hackwood, Susan, 17, 247
Hardy, G. H., 100
Hargadon, Andrew, 205
Hecht, Selig, 32
hegemon, 25–26
Hey, Jonathan, 19, 247
Hey, Tony, 19, 247
higher education
  in America, 86–87
  creative achievement and, 103, 249
hive mentality, 195, 196
Hollingsworth, J. Rogers, 9, 105–6, 247
hydrogen bomb, 93
hypertext links, 196
IAS. See Institute for Advanced Study
IC. See integrated circuit
India, 10, 13
industrial revolution
  in 18th century, 166
  in 19th century, 166
  American, 124
  white-collar automation and, 166
innovation, 3
  bottom-of-the-learning-curve-pricing and, 176–78, 180
  cash prizes as motivator of, 247–48
  as complementary with invention and diffusion, 170
  Edison and, 205–6
  Edison's power grid and, 124–26
  educational curriculum for, 136
  fundamental science's time horizon in, 47
  goal of, 169
  invention's path to, 5
  meeting customers' needs and, 200, 201, 202, 208n15
  as moneymaking ideas, 123
  NASA spin-offs and, 239–40
  open platforms of, 195
  overlap of technical feasibility and marketability, 169, 174
  past standards of, 22n3
  recombination and, 177–78, 185
  scope of, 185
  technology and, 5
  as usable and marketable, 167–68
Institute for Advanced Study (IAS)
  Bamberger's endowment of, 87
  computer project at, 245
  Einstein at, 83, 89
  Electronic Computer Project and, 15, 92–96, 245
  engineering facilities at, 93
  faculty appointments and compensation of, 90–91
  first professorships at, 89
  Flexner's directorship of, 15, 87, 88–90
  governance of, 91
  mathematics research and, 89
  Oppenheimer at, 92, 243–44
  Princeton University's connection to, 88, 90
  Veblen and, 84, 89, 90
  von Neumann's computer project at, 15, 92–93
institutions. See also organizations
  exceptional creativity and, 16, 117
  working relationships in, 115
integrated circuit (IC), 170, 172, 175–76, 178, 181
Intel, 134, 173, 177
International Education Board (IEB), 67
Internet, 4
  crowdsourcing and, 196
  hive mentality and, 196
invention
  bottom-of-the-learning-curve-pricing and, 176–78, 180
  as complementary with innovation and diffusion, 170
  as economy's knowledge, 167
  as innovation's path, 5
  as linked complementary series, 182
  new scientific and engineering ideas and, 167
  scope of, 185
  as technical knowledge, 167
IQ
  emergenic component of, 146–47
  exceptional creativity correlation with, 4–5
  technical creativity and, 146
  testing of, 4–5
Jobs, Steve, 249
Kaiser Wilhelm Institutes, 31–33
Kamen, Dean, 201
Kepler mission
  extra solar planets' detection by, 233
  Pale Blue Dot and, 233
  purpose of, 233
Kilby, Jack, 17, 133–34, 176
Kinect. See Microsoft
Kipman, Alex, 197–98
Kittel, Charlie, 76
Kober, Alice, 109, 110, 112
Kuhn, Thomas, 3
Laboratory of Molecular Biology, 25
Langley, Samuel P., 128–29, 138n15
Lanier, Jaron, 195
laser, 5, 202
LCROSS. See Lunar Crater Observing and Sensing Satellite
LED. See light-emitting diodes
light bulb. See electric light bulb
light-emitting diodes (LED), 13
Lindbergh, Charles, 247
Linear B, 16
  Bennett's classification of, 111
  decipherment announcements for, 108
  Evans and, 107
  Kober's decipherment contributions and, 112
  Ventris's collaboration and, 116
  Ventris's decipherment of, 107
Lohmann, Adolf, 216
Loyalty Oath, 76, 78
Lunar Crater Observing and Sensing Satellite (LCROSS), 234
R. H. Macy & Co., 86
magnetron microwave generator. See radar
management
  Bell Labs and, 8, 71–72, 75–76, 78, 80
  French style of, 29–30
  NAI structure of, 230
  people, 173–74
  scientific, 127–28
  Shockley and, 8, 76
Mars Exploration Rovers, 234–35
Mars Science Laboratory, 235
Massachusetts Institute of Technology, 15
Max-Planck Institutes, 25, 31
Maxwell, James Clerk, 37
mentoring, Bohr and, 57–58
Micro Instrumentation and Telemetry Systems (MITS), 183
microchip
  development of, 133–35
  electronics and, 130–31
  as engineering insight, 135
  Kilby and, 133–34
  Noyce and, 133–34
microprocessors, 134, 172, 179
Microsoft, 5
  Bones and, 198
  consumers' needs and, 200–201
  Kinect and, 19, 197–99, 204
  Research Lab and, 196, 198–99
  Xbox video game and, 19, 197, 198, 199, 203, 204
Minoan Linear B. See Linear B
MITS. See Micro Instrumentation and Telemetry Systems
Model T, 127
Moore, Gordon, 173–74
Moore, Patrick, 11
Mortensen, Peter, 200
Musée de l'Histoire Naturelle, 29–30
Myres, John, 109, 110, 112–13, 115
Nabarro, David, 217
NAI. See NASA Astrobiology Institute
Nakamura, Shuji, 13
NASA. See National Aeronautics and Space Administration
NASA Astrobiology Institute (NAI)
  Blumberg and, 9
  Blumberg as director of, 229, 232
  Blumberg's introductory address at, 229–30
  collaborative goals at, 230
  international cooperation and, 230–31
  management structure of, 230
  organization of, 229
  Virus Focus Group of, 232
National Aeronautics and Space Administration (NASA)
  astrobiology program established at, 228
  Blumberg at, 232
  deep space missions of, 235–37
  device spin-offs from, 239–40
  Galileo mission of, 235–37
  GPS and, 239
  Kepler mission of, 233
  lunar and Earth orbiting satellites of, 234
  Mars and, 234–35
  New Horizons' mission of, 237
  observatories of, 237
National Science Foundation, 123–24
Nelson, Ted, 196
neutrino, 63
Newton, Isaac, 16, 37
Niels Bohr Institute. See Copenhagen Institute for Theoretical Physics
Nintendo
  market focus and, 203
  successful technology and, 202–3
  Wii gaming system of, 202–3
  Wii's marketing by, 203–4
Nobel Prize, 11
  Alvarez and, 4–5
  Anderson and, 7
  Bardeen and, 8
  Blumberg and, 6
  Britain's abundance of, 39
  Cavendish Laboratory winners of, 38
  Chandrasekhar and, 13
  Feynman and, 5
  German winners of, 32
  Raman and, 13
  Shockley and, 4
  winners' statistics of, 148
Noyce, Robert, 17, 133–34, 173, 176
nuclear physics, 14
nuclear reactor, Bell Labs design of, 72
nucleus, atomic, 59–60
open-innovation platforms, 195
open-source projects, 194–95
Oppenheimer, J. Robert, 92, 95, 243–44
optics, 213, 214, 220
organizations. See also institutions; research organizations
  big science and, 47–48
  Dyson on, 242
  exceptional creativity in, 42–43
Orteig, Raymond, 247
Otto, Nikolaus, 127
Oxford Instruments, 3, 22n1
Oxford University, 16, 35, 37
Pale Blue Dot (PBD), 233
Pasteur, Louis, 29, 30
patent
  Einstein's leak-proof refrigerator, 193
  Shockley's transistor, 23n8, 132–33
  for Silver's variable focus lens, 218
  technical creativity and, 153–54
  university trustee and faculty correlation to, 153–54
Pauling, Linus, 109, 110, 112, 114–15
PBD. See Pale Blue Dot
PC. See personal computers
Peng Gong, 13–14
personal computers (PC), 170
  Altair and, 180–81
  CP/M machines and, 185
  as economic growth driver, 164
  future of, 185–86
  growth of, 183
  as mass-market volume business, 181
  software and, 183
  spreadsheet and, 184
  word processor and, 184
PhD. See doctoral programs
phonograph, 205–6
physics. See nuclear physics
Planck, Max, 32, 54, 192
Pope, Maurice, 108
Princeton University, 88, 90
problem-solving, 199, 200
production set, 165
Project Hindsight, 134–35
quantum mechanics
  Copenhagen Institute for Theoretical Physics and, 14
  as physics' greatest revolution, 59
  practical ramifications of, 65–66
quantum theory, 35
radar, 72–73
Raman, C. V., 10
Ramanujan, Srinivasa, 16, 99–101
Randall, John, 115, 244
Rask-Oersted Foundation, 67–68
read-only-memory (ROM), 179
recombination, innovation and, 178–79, 185
relativity theory
  Einstein and, 3, 103–4
  GPS and, 192
research organizations
  constraints of, 44–45
  as small interdisciplinary groups, 44, 45, 46, 47–48
Roberts, Ed, 180–81
Roberts, Richard, 228
Robinson, Andrew, 243, 244
Rockefeller, John D., 67–68
Rockefeller Institute for Medical Research, 84
Rockefeller University, 25
ROM. See read-only-memory
Royal Society, 100
Rutherford, Ernest, 54, 57–58
sabbaticals, at Bell Labs, 77, 78
Sagan, Carl, 233
Schrödinger, Erwin, 32
science. See basic science; big science; fundamental science
Segrè, Gino, 14, 243
Segway Personal Transporter (Segway), 19, 201–2, 204, 208n15
self-adjustable eyeglasses, 20
  astigmatic correction in, 222–23
  Britain's Department of Health and, 223–24
  children and, 222
  cost of, 224
  distribution of, 217–18, 219–20
  Dow Corning and, 20, 217–18
  fluid-filled lens prototype of, 213–15
  Ghana's field trial of, 217–18, 220
  health care impact of, 223
  need for, 214
  practitioners' probable reaction to, 220
  prototypes of, 213–15
  research funding for, 223
  Silver as inventor of, 20, 212–13
  Silver's target market for, 224
  testing standards of, 219
  trials using, 217–18, 220–22
  wearer's accuracy of, 219
self-refracting glasses. See self-adjustable eyeglasses
semiconductors
  Bell Labs program on, 72, 73, 76, 77
  research on, 131–32, 172–73
Shockley, William, 72
  management and, 8, 76
  as Nobel Prize winner, 4–5
  semiconductor research and, 131–32, 172–73
  transistor patent and, 23n8, 132–33
Shockley Semiconductor, 172–73
Shotton, Jamie, 198
Shulman, Bob, 79
Silicon Valley, 135–36
  knowledge stock of, 173–75
  market knowledge in, 175
Silver, Joshua, 20, 247
Simonton, Dean Keith, 104–6
Smithsonian Institution, 128
software, 183–84
solid-state physics, 72–73, 74, 75–76
spectroscopy, 73
spin-offs, 182, 239–40
spreadsheet, 184
  PC and, 184
Standard Oil of Indiana, 128
start-up companies
  Dyson's thoughts on, 249
  Google as example of, 247
  Silicon Valley and, 135–36, 173–75
Steinmetz, Charles, 127
superconductivity, 77
Swiss Patent Office, 103
't Hooft, Gerard, 192–93
Taylor, Frederick Winslow, 127–28
teams, 41
technical creativity
  characteristics of, 145–46
  collaboration and, 156
  factors fostering, 157
  five rules for, 158
  H1B visas and, 149–50, 159n15
  IQ correlation with, 146
  leadership types for, 144–45
  manager types and, 152–53
  nurturing problems of, 147
  patents and, 153–54
  university departments and, 151–52
technical progress
  annual rate of, 170
  input availability constraints and, 165–66
  production set and, 165
  as production set's expansion, 165
technological revolution, 248
technologist-managers, 173–74
telephone, 5
Terman, Frederick, 4, 135
Terman, Lewis, 4–5
Tesla, Nikola, 127
Theory of Inventive Problem Solving (TRIZ), 200
Thylefors, Björn, 217
Townes, Charles, 5, 73
transistor
  basic science and, 132–33
  Brattain and, 23n8, 132–33
  engineering and, 132–33
  as engineering problem's solution, 135
  Shockley's patent and, 23n8, 132–33
  as vacuum tube's replacement, 133
triode, 131, 133
TRIZ. See Theory of Inventive Problem Solving
Tusa, John, 101–2
Twain, Mark, 101
20/20 vision
  U.S. and United Kingdom standard for, 219
  WHO's acuity standard for, 219–20
United States. See America
University of Chicago, 64
University of Göttingen, 33–35
University of Zurich, 103–4
Upton, Francis, 126
vacuum tube. See triode
value creation, 163
variable-focus lens, 216, 218
Veblen, Oswald, 84, 89, 90
Veltman, Martinus, 192–93
Ventris, Michael, 16, 107, 108, 109–14, 116–18, 243
vision correction, 211, 214, 216–17, 218
volume discounts, 177
von Neumann, John, 15, 92–96, 245–46
Wai, Conrad, 200
Wales, Jimmy, 194
Watson, James D., 107, 108, 109, 110, 113–14, 244–45
Westinghouse, George, 127
White, Kevin, 219
Whittle, Frank, 130
WHO. See World Health Organization
Wii gaming system. See Nintendo
Wikipedia, 194
Wilkins, Maurice, 109, 110, 111, 244–45
wireless sensor technology, 19, 197–99
word processor, 184
World Health Organization (WHO), 217, 219–20
World War II, 39, 130
World Wide Web, 4, 19, 196
Wright, Martin, 216
Wright, Orville, 17, 129
Wright, Wilbur, 129
Wurtz, Adolphe, 30
W.W. II. See World War II
Xbox video game. See Microsoft
Young, Thomas, 12, 101
Zuckerberg, Mark, 249