
Promoting the Planck Club

Promoting the Planck Club How defiant youth, irreverent researchers and liberated universities can foster prosperity indefinitely

Donald W. Braben

Cover Design: Wiley Cover Photograph: Image taken from the Hubble Deep Field, a photograph of a region of sky 2.5 arc minutes across within the constellation of Ursa Major. Credit: Robert E. Williams, the Hubble Deep Field Team and the National Aeronautics and Space Administration Copyright © 2014 by John Wiley & Sons, Inc. All rights reserved. Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions. Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002. Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com. Library of Congress Cataloging-in-Publication Data: Braben, D. W., author.   Promoting the Planck Club : how defiant youth, irreverent researchers and liberated universities can foster prosperity indefinitely / by Donald W. Braben.    pages cm   Includes bibliographical references and index.   ISBN 978-1-118-54642-0 (pbk.)   1.  Scientists–Biography.  2.  Science– History.  3.  Discoveries in science.  I.  Title.   Q141.B775 2014   509.2′2–dc23 2013033554 Printed in the United States of America. 10  9  8  7  6  5  4  3  2  1

To Bill, Margie, and Ken, and to the memory of Jean

Major breakthroughs in science invariably involve the amalgamation of a kaleidoscope of disparate research studies, making the development of any rational strategies a futile exercise. There are as many ways to do outstanding science as there are outstanding scientists. Research often starts off in a specific direction but, as results unfold, new avenues open up. Discoveries that appear to arrive from “left field” litter the history of the sciences and serve as ubiquitously unheeded warnings to those who think they know how research should be carried out and what science is important. In this crucially important book, Don Braben has assembled an overwhelming case based on a plethora of historically significant scientific breakthroughs. He shows how foolhardy and, in fact, dangerous for the economy are the present research funding strategies, which focus primarily on “impact” when it is blatantly obvious that, as far as fundamental science is concerned, “impact” is impossible to assess before a fundamental advance has been made. I only hope that the people who presently control research funding are prepared to read this book, think carefully, and heed the advice.
Harry Kroto, The Florida State University, Nobel Laureate

Don Braben’s sobering book is right on the mark regarding the current disastrous path of funding of scientific research. Funding agencies are increasingly making decisions based on the proposed research’s perceived impact and benefit for society. As Braben documents so well, the emphasis on short-term performance cannot lead to scientific revolutions such as Rutherford’s discovery of the nucleus and Townes’ invention of the laser. Scientists now eschew risky proposals, knowing that someone on a review panel will say the work is “impossible.” Even when scientists are able to secure funding, much of their time is sapped by the increased paperwork, such as frequent reports on how “benchmarks” are being achieved. If a scientist dares to spend a few years developing a novel idea, his or her funding will be lost because of the “lack of productivity.” Braben proposes an approach to turn the tide of preoccupation with short-term performance: each funding agency could set aside a small portion of its budget to fund non-peer-reviewed proposals. Braben illustrates how this could work using as a model the Venture Research Program he directed in the 1980s. One can hope that Braben’s model will be widely adopted—it could change the landscape of science in future decades.
Harry L. Swinney, University of Texas at Austin, Member of the US National Academy of Sciences

Funding agencies and policy-makers should emulate Don Braben’s clear thinking, straight talking, wise values, broad learning, and acuity of insight. They might then liberate science, embolden innovation, and inspire academics in a more rational, prosperous, and interesting world.
Felipe Fernández-Armesto, University of Notre Dame, Indiana

Contents

List of Posters
Foreword
Acknowledgments
Introduction
Chapter 1 Accidents, Coincidences, and the Luck of the Draw: How Benjamin Thompson and Humphry Davy Enabled Michael Faraday to Electrify the World
Chapter 2 Science, Technology, and Economic Growth: Can Their Magical Relationships Be Controlled?
Chapter 3 Max Planck: A Reluctant Revolutionary with a Hunger of the Soul
Chapter 4 The Golden Age of Physics
Chapter 5 Oswald T. Avery: A Modest Diminutive Introverted Scientific Heavyweight
Chapter 6 Barbara McClintock (1902–1992): A Patient, Integrating, Maverick Interpreter of Living Systems
Chapter 7 Charles Townes: A Meticulously Careful Scientific Adventurer
Chapter 8 Carl Woese: A Staunch Advocate for Classical Biology
Chapter 9 Peter Mitchell: A High-Minded Creative and Courageous Bioenergetics Accountant
Chapter 10 Harry Kroto: An Artistic and Adventurous Chemist with a Flair for Astrophysics
Chapter 11 John Mattick: A Prominent Critic of Dogma and a Pioneer of the Idea That Genomes Contain Hidden Sources of Regulation
Chapter 12 Conclusions: How We Can Foster Prosperity Indefinitely
Appendix 1 Open Letter to Research Councils UK from Donald W. Braben and Others, Published in Times Higher Education, November 5, 2009
Appendix 2 Global Warming: A Coherent Approach
References
Index

List of Posters

Poster 1: On the Ease with Which the Future of a Fine Laboratory Can Be Jeopardized
Poster 2: Albert Einstein’s Inauspicious Youth
Poster 3: Sir William Macdonald and McGill University
Poster 4: Mach, the Universe, and You
Poster 5: The UK Medical Research Council (MRC) Laboratory of Molecular Biology
Poster 6: Ecological Niches
Poster 7: James Lovelock
Poster 8: Carbon
Poster 9: Big Bangs
Poster 10: Genome Libraries

Foreword

In this provocative book, Donald Braben presents compelling data, cogent analysis, and vivid historical episodes tracing the immense economic and social impact of frontier scientific research. He focuses on revolutionary discoveries that emerged from decidedly unorthodox “outlier” work of a relatively few scientists. Those pioneers he designates as the “Planck Club.” The name is apt: early in the twentieth century, Max Planck, confronted with experimental results inexplicable by well-established physics, reluctantly advanced an iconoclastic idea. After gestation for more than two decades, his idea gave birth to quantum mechanics, which profoundly transformed understanding of the nature of light and matter and produced a myriad of technologies. As in two sibling studies published by Wiley (Braben 2004 and 2008), Braben himself has emulated Planck. Armed with strong evidence, Braben has forthrightly challenged the now well-established and pervasive procedures for assessing and granting support for scientific research. These policies, based on “peer review” (actually, “preview,” as Braben emphasizes), have evolved over decades. Well-intended, but in many respects deeply flawed, the procedures imposed have increasingly dire consequences. Many scientists share Braben’s deep concern that prospects for support of future work of Planck Club caliber are becoming severely limited. This case was made starkly by the late Luis Alvarez, assuredly a Planck Club member. In his autobiography (Adventures of a Physicist, 1987), he wrote:

In my considered opinion, the peer review system, in which proposals rather than proposers are reviewed, is the greatest disaster to be visited upon the scientific community in this century. . . . I believe that U.S. science could recover from the stultifying effects of decades of misguided peer reviewing if we returned to the tried-and-true method of evaluating researchers rather than research proposals. Many people will say that my ideas are elitist, and I certainly agree. The alternative is the egalitarianism that we now practice and that I’ve seen nearly kill basic science in the USSR and in the People’s Republic of China.

Alvarez would be still more dismayed by how US science has become further burdened by current funding policies. At top-flight research universities, many professors must seek funding from several agencies in order to maintain their research groups. That requires them to devote inordinate time to writing proposals and reports, to the detriment of their teaching, mentoring, and own creative efforts. Thereby, graduate education has been degraded. The vital need to generate grant proposals causes faculty to avoid teaching small, advanced classes and also to discourage their graduate students from taking courses not directly relevant to their research project. Serving as hired hands on a project is also a major factor in stretching out the time to obtain a PhD, since veteran students are most useful in obtaining results to justify a grant renewal. Once usually about 4 years, the median time to obtain a PhD is now 6 or 7 in most fields of science. For postdoctoral fellows, terms have likewise become prolonged. Overall, the funding system has tended to narrow the training of our young scientists, prolong apprenticeship, and inhibit changing fields. Braben acknowledges that peer previewing of proposals will likely remain prevalent. Then it is all the more important to address problems and advocate feasible reforms. Here I want to augment his suggestions by commenting on two aspects. First, the previewing process, as now implemented, is needlessly capricious. Typically, National Science Foundation and other agencies accept grant proposals only during a “window” that is a month or so wide each year. The applicant usually is not informed of the fate of the proposal for a full year or more and is not provided with the assessments of the five or so anonymous previewers until a few weeks later. That deprives the applicant of objecting if one or more of the assessments is egregiously in error, or even resubmitting a revised proposal until the next window, another year hence. Such a system is misnamed “peer review.” For papers submitted to scientific journals, the author can respond to objections of anonymous reviewers, so has a fair chance to persuade the editor that the paper merits publication. I suggest that funding agencies try out a similar approach. The grant applicant could be given the option to post the proposal on a web site to which only viewers registered with the funding agency are given access. The agency would post the assessment from each anonymous previewer as soon as it has been received. Then the applicant could respond to criticism and actually be a “peer” in, say, two or three exchanges with the previewers. Also, the applicant and perhaps the agency, could designate a few other scientists, not anonymous, to have access to the web site and post comments on both the proposal and the anonymous assessments.
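The exchange-based process sketched in the previous paragraph is easy to picture as a simple workflow. The following minimal model is purely illustrative (hypothetical names and structure, not any agency’s actual system):

```python
# A minimal, illustrative model of the exchange-based "peer preview" process
# suggested above. Hypothetical structure only, not any agency's real system.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Exchange:
    reviewer_comment: str                     # posted anonymously by the agency
    applicant_response: Optional[str] = None  # applicant's reply, if any

@dataclass
class Proposal:
    applicant: str
    text: str
    max_rounds: int = 3                       # say, two or three exchanges per previewer
    exchanges: List[Exchange] = field(default_factory=list)

    def post_review(self, comment: str) -> None:
        """Agency posts each anonymous assessment as soon as it is received."""
        if len(self.exchanges) < self.max_rounds:
            self.exchanges.append(Exchange(reviewer_comment=comment))

    def respond(self, response: str) -> None:
        """Applicant answers the earliest assessment still awaiting a reply."""
        for exchange in self.exchanges:
            if exchange.applicant_response is None:
                exchange.applicant_response = response
                return

proposal = Proposal("A. N. Applicant", "Outline of the proposed research.")
proposal.post_review("Objection: the method cannot work at low temperature.")
proposal.respond("Reply: pilot data suggest otherwise; revised protocol attached.")
```

The point of the model is only that the applicant becomes a genuine participant in the exchange, rather than the silent subject of anonymous, unanswerable assessments.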

Second, funding of university research is largely to support graduate students and postdoctoral fellows, an essential investment in producing our scientific workforce. That investment is weakened by inflation of the time to obtain a doctorate, which makes pursuit of a scientific career less attractive to many students, especially women. In my generation, young scientists usually launched their independent research careers before reaching 30; now that is rare. For scientists receiving their first grant from the National Institutes of Health, the median age has reached 42. That alarming situation has led the current director of NIH to initiate an “Early Independence Program” for exceptional students, providing funds to enable them to bypass usual postdoctoral work and pursue their own ideas. I hope more such programs appear but urge that a much wider, radical approach is needed, which I’m convinced would markedly shorten the apprentice time and enhance its quality. Stipends in support of graduate students (and eventually postdoctoral fellows also) should be uncoupled from project grants to individual professors. The same money could be put into greatly expanding the fellowships that students can win for themselves, as well as into block training grants to university science departments. Winning a fellowship or obtaining a training grant profoundly influences a student’s outlook and approach to research; they are certified as national resources rather than as hired hands. Also important is the freedom to choose, without concern for funding, which research group to join. That would especially benefit young faculty. In applying for the student support (as done now for more limited NIH training grants), science departments would need to shape more coherent graduate programs, designed to produce doctorates who have broader backgrounds and perspectives and who are better equipped to be architects of science rather than narrow technicians.

Donald Braben deserves gratitude from everyone concerned about wisely managing our investments in science, particularly in developing our future scientists. May a “Braben Club” arise to amplify his clarion calls!

Dudley Herschbach
Professor of Chemistry and Nobel Laureate
Harvard University

Acknowledgments

I have enjoyed lavish support from Nick Lane and David Price in bringing this book to fruition. They have given freely of their time and energies, been unfailing in their friendships, and offered comments on the material to be discussed. I am grateful to Sir Malcolm Grant, University College London’s Provost, for being the only university head to go for the Venture Research philosophy. John Allen, William Amos, Paul Broda, Terry Clark, Rod Dowler, Irene Engel, Nina Fedoroff, Desmond Fitzgerald, Nigel Franks, Pat Heslop-Harrison, Dudley Herschbach, Herbert Huppert, Jeff Kimble, Nigel Keen, Roger Kornberg, Harry Kroto, James Ladyman, Mike Land, Peter Lawrence, Chris Leaver, John Mattick, Graham Parkhouse, Beatrice Pelloni, Martyn Poliakoff, Richard Pettigrew, Doug Randall, David Ray, Martin Rees, Peter Rich, Rich Roberts, Ian Ross, Ken Seddon, Colin Self, Iain Steel, Harry Swinney and Claudio Vita-Finzi have also generously supported me in my long crusade, some of them for many years, and would, I believe, also support my recommendations. I am grateful to Michael Ashburner, Tim Atkinson, Tim Birkhead, Peter Cameron, Richard Cogdell, David Colquhoun, Robert Constanza, Steve Davies, John Dainton, Peter Edwards, John Ellis, Felipe Fernández-Armesto, Andre Geim, Ann Glover, Frank Harold, Robert Horvitz, Tim Hunt, Alec Jeffreys, Angus Macintyre, Bob May, Philip Moriarty, Kostya Novoselov, Andrew Oswald, Gerald Pollock, Gene Stanley, John Sulston, John Meurig Thomas, Gregory van der Vink, Lewis Wolpert, and Phil Woodruff for their general encouragement and support. Phil Meredith and his colleagues at UCL’s Earth Sciences Department have for many years unconditionally accepted my participation in their weekly research seminars, for which I am grateful. But above all, I wish to thank my wife, Shirley, and also David and Wendy, Peter and Lisa, and Jenny and David for the consummate skill with which they have dealt with a distracted husband, father, and father-in-law in addition to their usual feedback, love, and affection.

Don Braben
January 2014

Introduction

The sciences play almost as vital a role in everyday life as the air we breathe. The water from our taps, the food we eat, our jobs, communications, travel, leisure activities, health, and unprecedented longevity all owe huge debts to science. However, such simple factual statements give no hints about the mountains of complexity that had to be overcome before any of these gains could be realized. The most important lesson to be learned is that science does not necessarily progress with the march of time. There is nothing inevitable about it; centuries may pass without any progression, and prolonged stagnation has been the usual result. Although science has led to the generally high living standards that most of the industrialized world enjoys today, the astounding discoveries underpinning them were made by a tiny number of courageous, out-of-step, visionary, determined, and passionate scientists working to their own agenda and radically challenging the status quo. Indeed, twentieth-century life was dominated by the unpredicted, revolutionary discoveries of about 500 of these pioneers. I call this seminal fellowship the “Planck Club” in honor of its first member (so to speak), Max Planck, who in Berlin on December 14, 1900, somewhat reluctantly announced that he had discovered an important new property of the universe. As I explain later, his work inspired a revolution, and nothing in science thereafter would ever be the same. The Planck Club’s uninhibited explorations eventually transformed our lives, yet many had to wait for years before the scientific community finally accepted them. Not surprisingly, it needed time to adjust to the radically new
mental pictures and ways of thinking that the discoveries required even after their authenticity had been conclusively demonstrated. Old habits die hard. However, after about 1970, when most Planck-Club campaigns had either come to fruition or were within range of doing so, the considerable expansion of the academic sector and its demand for funds led to the progressive introduction of new policies for dealing with the huge funding shortfall. Astonishingly, considering that academics are noted for their individuality, the policies adopted turned out to be virtually the same everywhere. Common themes have been that research selection processes should be as free from favoritism and discrimination as possible and should aim to support the researchers who will make the most efficient use of requested resources. Such fairness-based policies have been easy to sell to the public and academics generally as they can be presented as being above suspicion and as being the best ways of allocating scarce resources. Everyone with a good idea should have the same chance of getting funded, of course, but fairness is a social concept. It can be achieved only by collective decisions. For research selection, adoption of the now ubiquitous new policies means that freedom to explore without restrictions or control has been replaced by Byzantine procedures in which funding agencies seek endorsement from a selection of an applicant’s peers before they will consider their proposals—peers who, of course, are drawn from the notoriously conservative scientific community. To make matters worse, peers are usually allowed to express their opinions anonymously. My implied criticism here might be surprising, as anonymity surely means that peers can express their opinions without fear of the consequences, which of course is a laudable aim. However, scientists are also people; and, when asked to comment on the ideas of a close rival (or would-be rival), we should expect that some scientists might be unable to resist an opportunity for putting the boot in if they can get away with it. Indeed, as I argue, these well-intentioned but misguided policies are having disastrous consequences and are, in effect, unprecedented, global-scale gambles with future prosperity.

The overwhelming majority of members of the Planck Club were academics, a section of the community often renowned for their supposedly otherworldly detachment and indifference to the problems of real life. Nevertheless, their work inspired the creation of such down-to-earth technologies as the laser and a myriad of spin-offs, countless components of the electronic and telecommunications revolutions, nuclear power, biotechnology, and medical diagnostics galore, all of which are now indispensable parts of everyday life. They also gave huge boosts to economic growth throughout the century. Some $100 trillion in today’s currency would probably be a conservative estimate of their centennial global value, but economics is not a precise science, and who can put a value on the intangible benefits they brought to quality of life? Fine, you might say, that’s what academics do; but get your tenses right. That is indeed what they did, but bureaucracy has now intervened. For most of the twentieth century and indeed for most academic research—large projects and
major national initiatives excepted—government policies were, in effect, to not have specific policies. Funds were always tight, but appointed academics were usually free to tackle any problems that interested them as long as the necessary funds were modest. However, by about 1970 the scale of academic research had become too large to leave unmanaged. Most academics are publicly funded, and today researchers are not allowed to lift the proverbial test tube without having to convince their peers that the effort would be the best use of the required resources. Proposals usually take months to prepare, and most fail at this compulsory hurdle; sadly, many agencies either do not allow resubmissions or strictly control them. These policies lead, therefore, to frustration and colossal wastes of time and energy. Had they applied throughout the twentieth century, it is unlikely that the work leading to the most radical discoveries would have been funded simply because the researchers who made them were necessarily out of step with their colleagues, and life today would be unrecognizable. Academia is not the only source of scientific discovery, of course. Indeed, major industrial companies such as Bell Labs, BP, GE, and IBM were once altruistic and visionary in the research they would support, and spawned many Planck Club–type discoveries. Such large philanthropic organizations as those run by Andrew Carnegie, Howard Hughes, and John D. Rockefeller not only had similarly enlightened policies in the past but also have generally continued with them. Nowadays however, companies keep their scientists on tight leashes and firmly focused on short-term company benefit; and many other philanthropists now seek to target their giving and to increase the efficiency of its use—decisions that inevitably mean that they also concentrate on the “fashionable” fields. It is ironic that post ∼1970 there have been huge increases in the numbers of bodies devoted to science policy and such questions of how nations, companies, and organizations in general might improve their prospects by basing decision-making on the most robust advice available. One might think therefore that selection policies based on the opinions of an applicant’s closest competitors—or “peer review,” to give it its anodyne and widely accepted name—would have been evaluated ad nauseam long ago. As I have long argued, this arcane process by which future research is assessed should more accurately be called “peer preview,” which is the term I use henceforth. On the contrary, such consideration is conspicuous by its absence. Indeed, in the science-policy world, advice proudly presented as being “robust” simply implies that it has been thoroughly and properly assessed by peer preview. Thus, the ubiquitous funding bureaucracies have created their own set of catch-22 rules for ensuring that all criticism can be dismissed out of hand; comments on peer preview must, if they are not to be rejected, have peer preview approval. Thus, the received wisdom among scientific organizations everywhere today seems to be that any policy, advice, or research proposal that does not enjoy peer preview’s full blessing must be considered suspect or worthless.

The question therefore arises: Will the twenty-first century produce a Planck Club as spectacularly successful as that of the last century? No one can answer with certainty, of course. The potential of science as a source of major new opportunities for humanity in general is as great as ever; but since the begetters of new ideas must now run unscathed through mandatory gauntlets of their fellow experts who do not even need to publicly reveal their identities, the probability does not seem high. As I hope is explained more fully in Chapter 2, these issues are not merely the usual abstract affairs beloved by academics and that have no serious interest for ordinary people. Indeed, they could hardly be more important. Unless funding agencies can answer this crucial question with a resounding and convincing “Yes!” we should all be very worried about prospects for growth and a stable society. Unfortunately, there are signs that it is not even being discussed. It seems to have been tacitly assumed by governments and funding agencies that creativity will not be adversely affected by the radical policy changes and that we can continue to rely on academics to come up with steady streams of priceless new ideas as they always have done. In any event, public funding has always been subject to severe pressure, now made much worse by the current economic crises, of course; and the consensus among governments seems to be that academic research should not be exempted from the rigorous controls with which others must cope. By the end of 1941, it was beginning to be clear that the Allies would not lose the war. Economists and politicians began to think about how the postwar world should be changed and, in particular, how we might avoid the mistakes of the last war; mistakes, many asserted, that had led to the current conflagration. To say the least, views were diverse. Joseph Schumpeter (1883–1950) called for “incessant innovation,” and the “perennial gale of creative destruction.” John Maynard Keynes (1883–1946) believed that encouraging spending and discouraging savings could cure depressions. F. A. Hayek (1899–1992), an economic superstar following the publication of his The Road to Serfdom in 1944, believed in free markets, free trade, and sound money and was very influential in guiding the miraculous post-war German recovery—the Wirtschafstwunder—and won the Nobel Prize in 1974 for his “penetrating analysis of the interdependence of economic, social, and institutional phenomena.” They all wanted change and they all wanted growth, but there was little general agreement on precisely how that growth should be achieved. In July 1944, only a few weeks after the Allies’ D-day landings had begun, and with millions locked in mortal combat, the U.S. President invited 700 delegates from 40 countries to thrash out their differences on monetary policy at Bretton Woods. It did little to help the research enterprise. Money was desperately short, and an obvious priority was to get national economies moving again as quickly as possible. The Second World War had, of course, halted impartial scientific inquiry for a time, but it soon began to flourish again thanks to the passionate and sustained efforts of Vannevar Bush (1890–1974) in the US, and Henry Dale (1875–1968) in the UK, two countries that were

INTRODUCTION

5

then the world’s most scientifically influential. In 1945, economic problems were severe even by today’s standards. The US had 11 million personnel under arms, the UK some 3 million, all due to return home shortly and in need of jobs. Many millions of Europeans were homeless and hungry. Factories were heavily geared toward munitions, and the huge transition to profitable commercial operations and “normality” had to be accomplished as soon as possible. Against these stark imperatives, it would have been understandable had the authorities continued with the successful wartime policy of directing scientific research toward immediate national goals. Indeed, a powerful lobby led by US Senator Harley Kilgore wanted to set up an agency—an embryonic National Science Foundation (NSF)—that would indeed be under political control. In 1944, following a request from US President Franklin Delano Roosevelt, Bush prepared what became one of the most famous and inspirational reports ever written on scientific policy, Science: The Endless Frontier (Bush 1945), to take the initiative away from Kilgore. Its uncompromising recommendation to the President was for sustained federal commitment “to basic scientific research of no recognisable usefulness” (author’s emphasis). The dispute raged for some 5 years but was eventually resolved in Bush’s favor, leading to the creation of a largely independent NSF in 1950. On August 7, 1945, in the UK, Henry Dale, the Nobel Prize–winning President of the Royal Society, wrote a lengthy and impassioned letter to The Times in London making similar arguments (Braben 2008, p. 60). It concluded:

The true spirit of science working in freedom, seeking the truth only and fearing only falsehood and concealment, offers its lofty and austere contribution to man’s moral equipment, which the world cannot afford to lose or diminish.

The clear vision of Bush and Dale has been confirmed again and again. Similar vision is required today. Unfortunately, we scientists have failed miserably in convincing politicians and the public that new sciences are like vitamins: unless they ensure by whatever means adequate sources of fresh supplies, every one of us will suffer severely, and the global consequences of failure could also be grave. However, I should stress that I am not making a plea for more funding. Current levels are adequate, even in the UK, which to say the least has never been a world leader in the funding stakes. Total freedom, however, is an essential but missing ingredient nowadays. My task, therefore, is to describe the barely credible stories of some of the Planck Club’s precious few, to show what a vital role freedom has played in them and how their experiences justify my apparently extravagant remarks.

Perhaps our biggest problem is that funding agencies have lost sight of the fact that radically new discoveries have almost always stemmed from a single person becoming aware of a new and potentially important question or making an observation that exposes current ignorance. Science has many examples, but my favorite was made by the German physician-turned-astronomer
Heinrich Olbers (1758–1840), who in the 1820s famously publicized the question of why the night sky should be dark, though he was not the first to pose it. The stars we can see as bright specks are merely the ones closest to us. But if the universe is infinitely large—and there was no reason at that time to think it was not—there should be an infinite number of them. Our eyes might not be able to resolve them all, of course, but the light from an infinite number of stars, however individually feeble, should reach us anyway; and the night sky should therefore be as bright as day. A full answer to his paradoxical question is complex and had to wait for over a hundred years; it does not concern us here.* My point is that this profound question no doubt inspired countless scientists to grapple with it and reminded them of how little we truly understand, a lesson that indeed we should never forget. Without an awareness of ignorance, we are unlikely to have dissent, and without dissent, there can be no progress. It would seem, therefore, that humanity’s salvation surprisingly depends on a capacity to recognize ignorance. As soon as someone becomes aware of it and does something about it, humanity can hope to advance. Unfortunately, pioneers pointing to generally unrecognized areas of communal blindness are unlikely to be welcomed by senior apparatchiks and others delighting in the quality of the Emperor’s New Clothes and their control of the purse strings. Their implicit responses are that we should avoid territories in which operations cannot be efficiently managed and controlled: safer and more rewarding options can be found by intensifying the exploration of productive, predictable, well-charted fields, as it can be argued that they offer the highest returns on investments. These policies might be defensible for industrial companies as their short-term survival is clearly imperative, but for academic research they create serious limitations on the types of problems researchers are allowed to tackle. Indeed, they have changed the scientific landscape. Hitherto, its wild and unexplored terrain had always been a magnet for courageous and ambitious researchers; nowadays, however, those who choose the most accessible, obvious, and attractive objectives are given priority. Thus, current policies undermine the very spirit of research and exploration as they virtually ensure unsurprising outcomes. Indeed, it would seem that the future is now predictable. If we want to restore credibility, we must therefore find ways of restoring a faith in humanity’s unrestrained creativity that not so long ago was taken for granted; a faith that tolerated uninhibited pioneers, mavericks, iconoclasts, eccentrics, characters, rebellious youth and the awkward brigade in general, and which has now been abandoned. Although funding agencies and others have followed understandable routes to reach the present pass, we are unlikely to make progress until they recognize, tacitly or otherwise,

* There is a vast amount of literature. See, for example, Harrison (1987).

that the quality of their reasoning over the past few decades has deteriorated and has sometimes become unscientific and even anti-intellectual. Pragmatism is rarely applicable to academic research without triggering a downside. The university, as an institution dedicated to creating universal knowledge and to inspiring the young, must be able to rise above the demands of necessity if it is not to become yet another mission-oriented, profit-seeking (or loss-avoiding) organization. Nevertheless, academics must also find new ways of demonstrating to skeptical politicians (or at least some politicians; see Chapter 12) and indifferent publics the immense value of allowing their qualified members to create freely and to exchange new ideas. My last two books, Braben (2004) and Braben (2008), showed how symptoms of today’s funding problems could be treated effectively. However, they did not fully address the formidable problems of causes, why we can no longer afford to ignore the widespread effects of the new arrangements on academic creativity in general, and the steps we might take to alleviate them. Today, policy makers focus on getting the best value for money—holier-than-thou policies they know will have wide public appeal. They are applied in every other field, so why should academics be allowed to lead featherbedded existences? However, they mandate funding agencies to focus on the most attractive benefits. Unfortunately, science is global, and its visible frontiers look much the same to funding agencies everywhere: the chief variables are the levels of resources they have at their disposal, and they can be considerable (see Table 1). Funding levels are seen by many as the benchmark of a nation’s commitment to growth. But the situation is more complicated than that. For example, the UK currently invests some 1.8% of its national income in Research & Development (R&D) compared with ∼2.8% in the US. However, the UK’s

TABLE 1. Gross Domestic Expenditures on Research & Development as Percentages of GDP: 2010 (or latest available year) compared with 2001 (or first available year). [Bar chart by country, with values ranging from under 0.5% to roughly 4.5% of GDP; the country labels are garbled in this transcription.] Source: Organization for Economic Co-operation and Development Factbook, 2011–2012.

gross domestic product (GDP) is only ∼15% that of the US. It should therefore be clear that if smaller countries choose to compete in the most fashionable fields, even though Israel and the Scandinavian countries, say, may be proportionately high investors, they risk disadvantaging their researchers, as they will inevitably have much less cash than their competitors in the big countries. In 2010, US federal support for research at Harvard University alone was $600 million! However, Israel’s story is impressive. Israel’s proportion of the total number of scientific articles published worldwide is almost 10 times higher than its percentage of the world’s population. It may be a necessary condition that nations maintain competitiveness in the most important fields, and for the largest nations such a policy might also be sufficient. However, smaller nations must find an edge if they are to prosper. Progress in such fields as nanotechnology or molecular biology depends on access to competitive equipment, and, unfortunately, that equipment is also the most expensive. Focusing exclusively on priorities and competitiveness and relying totally on peer preview limit options and hand advantages to the biggest spenders in cash terms. However, many breakthroughs have come from shoestring budgets and the world-class use of the equipment between the ears of the brightest scientists, but they must have complete freedom.

The picture painted here may surprise many. Science and technology seem buoyant. Moore’s Law* continues to defy logic as innovators find ever-more ingenious ways of improving electronic performance and reducing costs. New discoveries in biotechnology and nanotechnology abound. All this is true and should be deservedly applauded and expanded, but closer examination reveals that they are derivative of intellectual capital created decades ago. The counterintuitive revolutionary discoveries made by such greats as Max Planck and Albert Einstein apparently belong to the past. We are living off the seed corn, therefore. It has apparently been forgotten, or put aside, that there are many difficult, intractable, and vitally important problems—aging, consciousness, chemistry-at-a-molecular-address†, the nature of gravity, and the origin of life, for example—on which our ignorance is extensive. However, the only people likely to make progress in these or other unconquered fields are those who have personally identified a specific facet of ignorance that is of overwhelming importance to them, and who feel confident that they have viable ways forward.
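As a rough illustration of what those R&D percentages mean in absolute cash terms, the approximate figures quoted above (UK R&D at ∼1.8% of a GDP only ∼15% the size of the US’s, against US R&D at ∼2.8%) can be combined directly; this is a back-of-the-envelope sketch, not a precise statistic:

```python
# Back-of-the-envelope comparison of absolute R&D spending implied by the
# approximate figures quoted above. Illustrative only; GDP is normalized.
us_gdp = 1.0                # normalize US GDP to 1
uk_gdp = 0.15 * us_gdp      # UK GDP is roughly 15% of US GDP
us_rd = 0.028 * us_gdp      # US invests ~2.8% of GDP in R&D
uk_rd = 0.018 * uk_gdp      # UK invests ~1.8% of GDP in R&D

print(f"UK R&D spend relative to the US: {uk_rd / us_rd:.1%}")
# -> about 10%: similar-looking percentages conceal a roughly tenfold gap in cash.
```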

* Proposed by Intel cofounder, Gordon E. Moore in 1965, Moore’s Law refers to the predicted doubling of performance every 2 years, including the size, cost, density, and speed of components, for the following 10 years. These predictions have not only been achieved but have continued until today and, indeed, show no signs of tailing off. † Chemistry has always been dominated by studies of well-stirred systems. But Nature never does chemistry in this way. No living system—you or me, for example—is well stirred, nor are the oceans, the atmosphere or, indeed, the universe. There is structure everywhere we look. Chemistry, as practiced by Nature, is always performed at highly specific times and places, as the processes within our cells confirm every second of our lives.

Scientists working to others’ agendas and timetables are unlikely to have the unlimited courage, dedication, and determination required. When I speak on these issues, some scientists comment that I should stop rocking the boat. Many scientists are doing excellent work, they say, so we should avoid actions, such as lobbying politicians or funding agencies, that might jeopardize future funding. It could, they might say, tempt such bodies as government treasuries to reduce science budgets as I imply that taxpayers’ money is being wasted. However, I go much further than that; taxpayers’ money is being wasted because current policies encourage scientists to compete with each other rather than with Nature. There is no question that a vast amount of superb and valuable work is being done within the mainstreams today, categories that embrace the vast majority of research. It is also true that current policies are probably working as well in these fields as any possible alternatives. But, as the new policies bite deeper, science loses diversity and vitality. The current situation did not arise overnight. Scientists who staff funding agencies have over the years been forced, cajoled, or seduced by politicians (or by whoever they wish to blame) into accepting increasingly more stringent conditions on the funds they are employed to disburse. These problems are global and affect every country in the industrial world, and even those who currently aspire to membership of this elite group are being sucked into following similar paths. The UK is an egregious example of the unwisdom of current policies, and it leads the world in the search for new ways to manage and control academic research. It is well known that the UK’s economic performance has been sluggish for many years and that its industry has also invested much less in research and development than its international competitors.* However, rather than taking muscular industry to task for its myopia, UK government has taken the preposterous but much easier step of implying that the universities are to blame for the nation’s underperformance. A Government White Paper, Realising Our Potential (1993), said:

Programmes which are successful in terms of the quality of research may offer no commensurate economic benefit to the country if firms and other organisations cannot use the results (author’s italics). The Government believes that a co-operative effort is needed to produce a better match between publicly funded strategic research and the needs of industry and other users of research outputs.

In 1994, government converted this passive belief into mandatory and unavoidable requirements for action by substantially changing the Research Councils’ Royal Charters, hitherto thought by many to be inviolable and timeless tablets of stone, specifically charging them with contributing to the UK’s economic competitiveness and quality of life. Their charters before that merely made them responsible for carrying out scientific research.

In 1994, government converted this passive belief into mandatory and unavoidable requirements for action by substantially changing the Research Councils’ Royal Charters, hitherto thought by many to be inviolable and timeless tablets of stone, specifically charging them with contributing to the UK’s economic competitiveness and quality of life. Their charters before that merely made them responsible for carrying out scientific research. There was no * See, for example, Monitoring Industrial Research (2011).

There was no debate with academics on the merits or consequences of this enormous change—it was merely announced. The academic community dutifully acquiesced. The UK differs radically in another important respect in that its Research Assessment Exercises (RAE)* now extend peer review’s uses far beyond those of virtually all other industrialized nations. Initiated in 1986 and repeated five times (until 2008), they graded university departments according to complex logic-defying formulae that include quantitative estimates of the quality of each researcher’s work. The funding councils subsequently used these grades to determine a university’s research grant. They wielded formidable power, and universities that slipped a grade were suddenly required to adjust to the subsequent budget cuts and such social consequences as job losses. RAE logistics were as complex as any military operation. The last exercise in 2008 involved 67 panels of experts using ∼1000 panelists who together had to examine about 200,000 pieces of published work from about 50,000 academics in less than a year. It was the biggest peer review operation in the world. Each panelist graded on average some 50 researchers whose publications, please remember, had already necessarily satisfied the scrutiny of many layers of peer review. This is multiple jeopardy on the grandest of scales. In 2009, the Research Councils piled yet another layer of bureaucracy onto long-suffering but acquiescing researchers. The Pathways to Impact initiative requires applicants to prepare, in addition to their research proposal, a two-page summary of the potential social or economic impact their research might have and the arrangements they propose to make for ensuring that its potential impact is realized. This additional submission is also evaluated by peer preview even though it is well known that even battle-scarred industrialists accustomed to surviving in the marketplace cannot accurately predict future performance. Academic research once benefited from a culture that valued and encouraged creative and defiant youth. However, Pathways to Impact encourages anxious researchers to dissemble, exaggerate, spin, and, of course, aim for the most attractive goals as these are most likely to get funded. This radical development led the author to write an Open Letter to Research Councils UK (RCUK), the strategic partnership of the UK’s seven Research Councils, expressing concern about the effects the new initiative would have on creativity and requesting that the initiative should be withdrawn. The letter, published by Times Higher Education on November 5, 2009, was signed by 48 senior UK and US scientists including nine Nobel Prize winners; Appendix 1 contains the full letter.

* RAEs are administered by the parochial Higher Education Funding Councils set up in 1993 in England, Scotland, Wales and Northern Ireland for administering the universities. These bodies replaced the fiercely independent and national University Grants Committee, and are now explicit instruments of government.

The letter also drew RCUK’s attention to a statement from the Russell Group, a body comprising the UK’s leading research universities, issued in 2007:

There is no evidence to date of any rigorous way of measuring economic impact other than in the very broadest of terms and outputs. It is therefore extremely difficult to see how such Panel members (those expert in the economic impact of research) could be identified or the basis upon which they would be expected to make their observations. Without such a rigorous and accepted methodology, this proposal could do more harm than good.

Following publication of the letter, the Chairman of RCUK, Professor Alan Thorpe, invited a small representative group of us to a meeting on January 12, 2010, to discuss our differences. One member of the group, Sir Tim Hunt, a Nobel Laureate, told the meeting: Impact is a weed that must be eradicated before its toxic spores sprout everywhere.

However, no replies to our questions were forthcoming, no evidence was produced to support the new initiative, and we were told that there was “not a chance” that RCUK would abandon its impact agenda. This somewhat less-than-open-minded dialogue continued for another meeting with RCUK and also with a meeting that included the government’s Minister for Universities and Science, David Willetts. No progress was made. Impact’s toxic spores continue to sprout, therefore, and much science continues to be lost. Academics are rarely entrepreneurs. Although they might identify potential industrial opportunities, they rarely have the power to influence industrial policy. Even for industrialists, future development pathways are virtually impossible to predict accurately. To give one notorious example, in 1975 César Milstein and Georges Köhler, researchers from the Medical Research Council’s Laboratory of Molecular Biology in Cambridge, discovered monoclonal antibodies, for which they won a Nobel Prize in 1984. However, the National Research Development Corporation (NRDC),* a government body made up of senior industrialists, patent lawyers, venture capitalists, and financial experts that was specifically set up to ensure that good ideas from academic research get to markets, failed to see any value in monoclonal antibodies and declined to patent their results despite being urged to do so by Milstein and Köhler. That omission probably lost the UK taxpayer several hundreds of millions of pounds. However, as Milstein was quoted in The Economist (“Monoclonal Antibodies” 2000), they never imagined their invention would grow into a multibillion-dollar market used to diagnose disease and more recently to treat it.

* In 1981, the NRDC was merged with the National Enterprise Board to form the British Technology Group. It is now, as BTG plc, a private company.

RAEs were ended in 2008, but not with the intention of giving academics some respite—far from it. The Funding Councils’ unrelenting and virtually unopposed campaigns to create a compliant academic sector will continue with the Research Excellence Framework (REF) in 2014.* By 2011, few university departments had not already started their extensive preparations for this latest inquisition. The specific formulae used for assessments will change, but the essential features of the RAEs will remain. As a check that researchers have not somehow wriggled out of their responsibilities to the “Impact Gods,” the REF will retrospectively assess the impact of actual achievements using peer review yet again. Amazingly, however, applicants are allowed to count only those benefits they have induced themselves. Accidental benefits, or those brought about by others, are excluded! Retrospective assessment is of course a huge logical advance on the research councils’ use of crystal balls. Hindsight is rarely unreliable. However, its timescale is ludicrous. Time to market for academic discoveries is often measured in decades, yet the funding councils apparently believe that creativity—that most delicate plant—will not be adversely affected by their announcement that its harvests in the future will be subject to rigorous examination after only a few years. The UK’s scientific leadership has apparently forgotten the lessons of history. To give only one example here, and one that happened on their very doorstep, Alexander Fleming (1881–1955), working at St Mary’s Hospital Medical School, Imperial College London, discovered penicillin accidentally†—an entirely chance observation. Furthermore, his research program was not designed or influenced by any considerations of the pressing problems of the day. A discovery that subsequently touched the lives of hundreds of millions stemmed from his noticing that a culture plate of staphylococci seemed to have been contaminated by a mold. Rather than throwing the plate away as a failure, as many might have done, he carefully studied it, and went on to isolate the mold and to discover that it had remarkable properties. In his own words, Fleming (1945) commented:

To my generation of bacteriologists the inhibition of one microbe by another was commonplace . . . and indeed it is seldom that an observant clinical bacteriologist can pass a week without seeing in the course of his ordinary work very definite instances of bacterial antagonism. It seems likely that this fact . . . hindered rather than helped the initiation of the study of antibiotics as we know it today. Certainly the older work on antagonism had no influence on the beginning of penicillin. It arose simply from a fortunate occurrence which happened when I was working on a purely academic bacteriological problem which had nothing to do with antagonism, or moulds, or antiseptics, or antibiotics. . . . In my first publication I might have claimed that I had come to the conclusion, as a result of serious study of the literature and deep thought, that valuable antibacterial substances were made by moulds and that I set out to investigate the problem. That would have been untrue and I preferred to tell the truth that penicillin started as a chance observation. My only merit is that I did not neglect the observation and that I pursued the subject. . . . To me it was of especial interest to see how a simple observation made in a hospital laboratory in London had eventually developed into a large industry and how what everyone at one time thought was merely one of my toys had by purification become the nearest approach to the ideal substance for curing many of our common infections.

* The Higher Education Funding Council for England says that the REF is a process of expert review. Higher education institutes will be invited to make submissions in 36 “units of assessment.” Submissions will be assessed by an expert subpanel for each unit of assessment, working under the guidance of four main panels. Subpanels will apply a set of generic “assessment criteria and level definitions” to produce an overall quality profile for each submission. George Orwell could not have done a better job of writing this newspeak gobbledygook.

† The charismatic, courageous Hungarian-born scientist Albert Szent-Györgyi (1893–1986), who won the Nobel Prize in 1937 for his discovery of Vitamin C, among other things, once defined a discovery as “a collision between an accident and a prepared mind.” However, as he did most of his work before 1970, he obviously felt no need to add that the prepared mind’s owner must also be free to explore. Such freedom was then routinely available.

How could the work that led to such a wonderful discovery have survived the predations of priorities, value for money, peer preview, relevance, Pathways to Impact, research assessment exercises, and other bureaucratic instruments for management and control, as it would have to do today? Fleming had noticed in his contaminated plate the tip of an iceberg of ignorance, which he had been free to explore to determine its potential importance. Some scientists would not agree that Fleming deserves to be included in my imaginary Planck Club. Others (Howard Florey and Ernst Chain, who shared the 1945 Nobel Prize with Fleming) were essential to bring the work to economic and social fruition, but of course Fleming’s initial observation was crucial. Nevertheless, the REF would exclude a modern Fleming from taking any credit for the impact of his discovery as he did not foresee or specifically arrange it. The responsibilities of running Research Councils and funding agencies in general are, of course, onerous and formidable, and those who accept them on our behalf rightly deserve our applause and the enhanced status they enjoy. Thousands of scientists depend on them not only for their funding but also for their livelihoods. But our proxies have also accepted roles as guardians of scientific enterprise and academic freedom. Over the years, as the new policies have matured and become ever more onerous, our proxies have clearly failed to convince politicians and other providers of funds of the dreadful consequences these policies are undoubtedly having. Politicians rarely succumb to pressure and academics have few cards to play, so our proxies’ tasks are not easy, but they are concentrating on froth. For example, RCUK, a body that includes senior representatives of the UK’s Research Councils and other influential people, all with impeccable credentials, announced in an Excellence with Impact statement (RCUKa 2011), “The UK is a world leader in economic impact methodologies.”


In addition to this sound-bite pseudo-commercial, another of its publications (RCUKb, 2012) gives, among other things, “top tips” that might help researchers fine-tune their impact statements. In another publication issued by the Engineering and Physical Sciences Research Council’s Ideas Factory (2012), it announces a call for participants to take part “in a five-day sandpit to look for innovative ways to explore Digital Personhood.” The publication helpfully explains that participants will be expected to engage constructively in discussion with each other to prepare collaborative proposals. Up to £5 million has been allocated to fund collaborative proposals arising from this sandpit, quite a treasure for what is normally a children’s play area. But the patronizing use of such terms as “sandpit” is not exceptional. The general reader may not be aware of the funding agencies’ routine use of vacuous clichés nowadays. They include: Actors; at the interfaces between disciplines; benchmarks; brainware; cartoon-strip workshops; crosscutting; generic richness; horizon scanning; pebble structures; risk appetite; stage-gating; winning post syndrome.

None of these vague pseudo-scientific terms, or any concepts that might be behind them, was generally in use before the ∼1970 watershed. That the agencies use them without apology or deprecation would seem to indicate a preoccupation with short-term performance and a need to communicate attractively with a generation that may appreciate informality. I have worked with many young scientists, and in my experience they are as firmly convinced of the need for clarity and precision in the language they use as their seniors. However, many researchers cannot afford to ignore such fatuous publications or to be contemptuous of them as they may contain news of opportunities that may ease their funding difficulties. No other country, in proportion to its population, has a better postwar record of excellence in research as judged by the number of Nobel Prizes awarded, the world’s most coveted scientific accolades.* RCUK and the funding councils are trustees of that impressive scientific tradition, and custodians of a long record of achievement at the highest intellectual levels. Against that background, therefore, it almost defies belief that senior scientists should lend their authority to statements of such profound banality, of which those few listed in the previous paragraph are typical. Even worse, they seem determined to ensure that the academic community conforms to their absurd rules on impact, say, as nonconformity would forfeit all prospects of funding. Who are they seeking to impress? What new industries or improvements in living
* Between 1945 and 1989, the UK (with less than 1% of world population) won ∼19% of Nobel Prizes in physics, chemistry, and physiology and medicine compared with 49% won by the US (with some 4.4% of world population). Between 1990 and 2010, the UK figure had fallen to ∼10% while the US had increased its proportion to 62%.


standards or public acclaim do they expect that leadership “in economic impact methodologies” will create? These are rhetorical questions, of course, but I have often put them to senior Research Council staff. They usually respond by saying that as researchers are funded by public money, the public is entitled to know what benefits they are getting from their investment. They forget that this never seems to have been a concern before about 1970. The great philanthropists such as John D. Rockefeller in the early twentieth century, whose perspectives probably reflected the public’s, provided substantial funds for foundations globally, “to promote the well-being of mankind throughout the world.” Academics could hardly have a better mission statement, but its implementation depends on trust. Government treasuries trusted universities in the past. We must begin to restore that trust.

Our proxies’ task is indeed difficult, but they seem to have put aside what could be a priceless weapon. In 1957, Robert Solow, a student of Wassily Leontief, concluded that technical change is responsible for about 90% of economic growth, a remarkable discovery that has since been confirmed by armies of economists, though the agreed magnitude of the contribution has fallen slightly. As new science is by far the biggest source of technical change, our proxies would now seem to have virtually tangible proof of the value of new science and the dividends that flow from scientific freedom and should therefore be able to make an irrefutable case. Are our proxies making determined efforts to do so; and, if they are, why are politicians apparently unimpressed?

In the event that my assessments fall on deaf ears, what can be done? My proposed solutions to the problem are given in Chapter 12. Bearing in mind the sharp increase in the number of universities worldwide in the last few decades, it may no longer be possible to return to the idea of freedom for all within the restrictions imposed by fixed funding budgets. Unlikely as it now is, that option would be preferable, as researchers would not have to declare their intentions before they begin their crusades and would simply choose their problem before submitting the results of their inquiries to peer review, as they eventually must. I propose a complex mix that includes the following:
• Private investors such as Rockefeller, Carnegie, or Bill and Melinda Gates should consider the solution proposed in my Scientific Freedom, in which researchers apply for modest levels of funding to tackle the problems that are currently being ignored and to radically challenge the way we think about important topics.
• Universities should set up their own venture research initiatives (see Chapter 12) funded by the vice chancellor or chief executive from his or her private resources. Such an initiative has been adopted at University College London and has proved successful.

1 Accidents, Coincidences, and the Luck of the Draw: How Benjamin Thompson and Humphry Davy Enabled Michael Faraday to Electrify the World

Ambition’s ladder is not the same for everyone. For many, the first rung is attained only with unremitting determination. Some transitions might require radical changes in social status, and the gulf may be so huge that the chances of a successful leap will probably depend on events over which an individual has no control. Even the privileged might need help in avoiding ill-matched career expectations of well-intentioned families. Senses must constantly be on the alert for those game-changing events; if and when they come, that might give the necessary kick that allows an escape from apparently preordained orbits. We know some of the lucky ones in these respects: Isaac Newton and Michael Faraday, for example, but no doubt there are countless others on whom fortune did not smile at the right time. Newton was a farmer’s son seemingly destined to be a farmer’s father. But the Master of his school, Henry Stokes, managed to persuade Newton’s mother Hannah (his father having died three months before Isaac was born) that Newton had exceptional talent and, rather than make him take to the plow, he should go to a university. She could see no value in that. To her, protecting and cultivating the land was everything. Luckily for us all, Stokes did not give up, Hannah was reluctantly convinced, and Newton was able to take that fateful first step that would lead him to Cambridge University and on to greatness. Michael Faraday’s (1791–1867) story is more complex. The son of a blacksmith who worked when his generally poor health allowed, he was born in the English village of Newington in Surrey (now a south-London suburb); and his


only education came from a church Sunday school where he was taught to read and write. At thirteen, he began to deliver newspapers, as I did myself at that age, and also not for pocket money but from a need to supplement the family income. At fourteen, however, his employer took him on as an apprentice bookbinder, a 7-year stint which fortunately brought him into regular contact with the scientific literature of the day, and much more importantly, with the aficionados who read it. A few months before the end of his apprenticeship, one of his scientific friends invited him to attend a series of lectures to be given by Humphry Davy (1778–1829) at the Royal Institution in London. They not only changed his life but everyone else’s, too. The Royal Institution itself has unlikely origins, having been created in 1800, when Faraday was still a schoolboy, by an American Anglophile and refugee from the American War of Independence—Benjamin Thompson (1753–1814). One of the most colorful and charismatic characters in the history of science, Thompson was born on a farm in Woburn, Massachusetts, and was indentured at 13 to an importer of dry goods, a most unlikely beginning to a distinguished scientific career. War was in the air, and he lost his job when the merchants of New England combined to ban imports in protest against the high taxes levied by Britain. However, in Thompson’s case, the loss of his livelihood turned out to be the ill wind that blew this ambitious, shrewd, brilliantly resourceful, tall, dark, and handsome young man into a checkered career of the most amazing diversity. At nineteen, after moving to the greener pastures of Concord (the influential town that was later to become the state capital of New Hampshire), he married a rich and influential widow 11 years his senior. A year later, making full use of his good fortune and her powerful connections, he persuaded the Royal Governor to give him a major’s commission, no less, in the New Hampshire militia. He was not only supremely indifferent to his fellow officers’ hostility to such conspicuous preferment, even by the standards of the time, but he also seems to have been astute enough to recognize (or it might have been part of the deal with the Royal Governor) that it would help his long-term career if he agreed to spy for his royalist patrons.* The American Declaration of Independence in 1776 caught him on the losing side, but he managed to avoid the vengeance of his victorious colleagues by escaping to London. He could now practice his powers of persuasion on what was then a much bigger stage. Once again, he was almost immediately successful, and the British Government, in gratitude for services he might have exaggerated, appointed him a Minister of the Crown with responsibility for administering the colonies, a major part of which they had just lost. He had the leisure,
* A century and a half later, this nonstandard scientific activity was to bring Rumford’s work to the attention of Sanborn Brown, a scientist at the Massachusetts Institute of Technology who, having served in the intelligence services during World War II, became interested in what Thompson had done and went on to write a fascinating biography of this extraordinary person. (See Brown 1962.)


therefore, to indulge the passion for science, and particularly for mechanics, that he had had since childhood. While some scientists are driven by altruism and self-sacrifice, scientists in general do not of course have a monopoly on benevolence. As in other professions, some scientists can be spurred as much by Machiavellian motivations as by any others. What lasts is the quality of the contributions one is able to make: the attribution of motives can often be arbitrary and can sometimes arise from the malice of those who might have been upstaged or, as we say in the UK, “pipped at the post,” or beaten to the finishing line by the narrowest of margins. In my opinion, therefore, there is no good reason to look down on Benjamin Thompson’s work (as many have been inclined to do) because he used his considerable talents as a scientist and inventor as a means to further his personal ambitions and to win favor and influence with the great and the good. He was consummately good at all these things. Soon after he settled in London, he was elected a Fellow of the Royal Society for his work on measuring the explosive force of gunpowder in the armaments of the day. He went on to persuade King George III to promote him to a full colonel in the King’s American Dragoons and to make it a regular British regiment. Following the peace with America in 1783, he retired on half pay to travel to Bavaria, perhaps to resume his career in espionage. Whatever he did, on his return to London a few months later it was sufficient to persuade the King to knight him. Armed with this exalted rank, he went back to Bavaria and became aide-de-camp to the Elector, Karl Theodor. He was given a series of jobs, most notably the reorganization of the army and the feeding of the numerous Bavarian poor. He was triumphantly successful, and by 1791 he held simultaneously the posts of Minister of War, Minister of Police, Major General, Chamberlain of the Court, and State Counselor, anticipating in reality the wondrous range of appointments held by the fictitious Pooh-Bah of Mikado fame by almost a century, and was second in importance only to the Elector himself. In 1792, the Elector made him a Count of the Holy Roman Empire; and he took the name of Rumford, the early name for Concord, the New Hampshire town from which he launched his meteoric rise. Returning to London, he saw the need for a means by which he could promote his prolific scientific and technological discoveries, which later most famously included the smokeless fireplace and the coffee percolator, which are still in use today, and a wide range of equipment for kitchens and laundries. He also did important theoretical work on heat and combustion. He proposed that a museum for science should be set up—not only for historical study, but also as a place where, as in the museums of ancient Greece, people could go to hear about current scientific genius and, in this case, particularly his own. With typical panache, he persuaded some 60 people each to subscribe 50 guineas or more (about £3000 or some $5,000 of early twenty-first century money) to create the Royal Institution of Great Britain as:


a public institution for diffusing the knowledge of useful mechanical inventions and improvements, and for teaching by courses of philosophical lectures and experiments, the applications of science to the common purposes of life.

By 1800 this remarkable institution was up and running at 21 Albemarle Street in the opulent Mayfair district of London’s West End. Small as it was, it soon became one of the world’s most prestigious laboratories, attracting the leading scientists, particularly those who valued their freedom of action. Its first professor of natural history and chemistry was Dr. Thomas Garnett, but he did not get on with the Count and lasted only a year. Humphry Davy was appointed in his place when he was only 23 and better known as a poet than as a scientist. Samuel Taylor Coleridge was his friend and patron, and he was also friendly with William Wordsworth and Robert Southey, who would later become the Poet Laureate. These influential sponsors were fully aware of his scientific potential, however, and arranged an introduction to Count Rumford. His sparkling enthusiasm, no doubt helped by his literary turns of phrase, made an immediate impression on everyone he came across. Harold Hartley, the British physical chemist, made the following comment on Davy’s “Introductory Discourse” to his first course of lectures on chemistry given at the Institution in January of 1802 (Hartley 1972, p. 41):

Here was a young man, just 24, facing a brilliant London audience to speak about a subject he had taught himself and in which his experience had been in certain specialized fields. They must have been fascinated by the broad sweep of his imagination, his visionary insight into the future of the new science and its contribution to human progress.

Rumford deserves great credit for making an inspired choice, and for recognizing Davy’s immense scientific potential despite his lack of scientific training, a failing Rumford might not necessarily have thought was a drawback, of course, as his own education had ended when he was 13! But Rumford’s creation was being seriously undermined from an unlikely source. In 1776, James Watt (the Scottish engineer) in partnership with Matthew Boulton (a business colleague) had patented his famous steam engine and had been granted a monopoly on their supply, a monopoly that expired in 1800. But Rumford’s Institution was a place where all the latest inventions—and Watt’s steam engine in particular—could be publicly explained and displayed. Not surprisingly, many patent holders were not too pleased with this open policy on their inventions. Watt and Boulton especially were very sensitive to anything that might threaten their very profitable monopoly, and they wrote to their rich and influential friends urging them to withdraw their support for the Institution (Brown 1962, p. 137). Sadly, their campaign was successful, and the Institution’s finances soon deteriorated. Rumford also felt increasingly isolated and unwelcome. By chance, he made a short visit to Paris in 1801. Even though Britain and France were at war, he was honored and respected everywhere;


indeed, he was feted by Napoleon himself! The contrasts were stark, and he found Paris irresistible. When Rumford first met Davy, therefore, he was probably able to see at once that he was a man he could entrust with his beloved Institution. Returning to London from his Paris excursion, he put his personal business in order and later in 1802 left London forever to take up residence in Paris. He could not have left the Institution in better hands, and eventually it became one of the most successful research laboratories ever built. The reasons for that success might be difficult to understand today. Aware that his lab would soon need additional funding, Davy succeeded (perhaps by tapping into his literary contacts) in repairing some of the damage done by Watt and others and made it a stunning attraction among London’s rich without compromising its scientific credentials—and this despite the competition from London’s burgeoning theatreland located close by. Mayfair’s “distinction and fashion” regularly thronged to the lab to hear the latest news from the science front. The resultant heavy traffic made it necessary in 1808 to make Albemarle Street the world’s first one-way street: coachmen were instructed to pick up and set down with horses’ heads toward Grafton Street. Mayfair is still home to the rich, but how many of them take any interest whatever in the activities of the wonderful lab on their doorstep or even know that it is there? They should (see Poster 1).

Poster 1: On the Ease with Which the Future of a Fine Laboratory Can Be Jeopardized

The Royal Institution’s fortunes were always delicately balanced; but, in 1894, the great industrial chemist Ludwig Mond (1839–1909), who was born German but took British citizenship in 1880, made the fabulously generous gift of £100,000 to the Institution. He also donated the property next door to create a new lab—the Davy Faraday Research Laboratory (DFRL). In recognition of its importance, the Prince of Wales, who was shortly to become King Edward VII, officially opened it in 1896. At the opening ceremony, Mond said, “The DFRL was unique of its kind, being the only public laboratory in the world solely devoted to research in pure science.” Thanks to Mond’s intervention, the institute could continue to extend its reputation for scientific excellence. Lord Rayleigh, Director of the DFRL, in collaboration with William Ramsay, discovered the noble gas argon, work recognized in 1904 when Rayleigh received the Nobel Prize in Physics and Ramsay the Nobel Prize in Chemistry. Furthermore, four institute directors won Nobel Prizes—Sir William Bragg, Sir Lawrence Bragg, Sir Henry Dale, and Sir George Porter (later Lord Porter). Unfortunately, the standard of the Institution’s governance seems now to have fallen far short of the levels of excellence set by its scientists. The


laboratory’s record as a haven for creative researchers had been unbroken almost to the present day, but sadly, seems now to have virtually ended. About 5 years ago, the Institution decided on a £22 million refit to the eighteenth-century building, and part of it was converted to an upmarket restaurant and bar. This was an astonishing decision given the Institution’s location in Mayfair, one of the world’s most hedonistic fleshpots where dozens of such restaurants can be found within a few minutes’ walk of the lab. Faraday, who was renowned for simple living, must be turning in his grave! As I write in 2013, the Institution is in serious financial difficulties, although an anonymous donor has given some £4.4 million that will no doubt provide some much needed respite. However, it would seem that these problems could so easily have been avoided. In a saner world, Sir Harry Kroto (see Chapter 10) would probably have been offered the directorship of the Institution in 1998, an appointment that, after winning a Nobel Prize, he thought would be a natural and logical progression in his career. Indeed, it would have also been in keeping with the Institution’s illustrious history. It did not happen. Kroto is one of the most honest and outspoken scientists I know. He would never have approved such a profligate policy; indeed, a small group of us under his leadership are doing what we can to help. A generous, thoughtful, philanthropic, and creative leadership could make a huge difference. It is not too late to restore the Institute’s superb scientific reputation and, indeed, Count Rumford’s and Mond’s visions for an Institute located at the heart of one of the world’s major capital cities. Such vision is needed today more than ever, and a revitalized Institute could play an important role in restoring the idea that scientific leadership should be based on science alone.

Davy is most famous for his discovery of the highly reactive metals sodium and potassium in 1807. One of the reasons for the Institute’s appeal to the gentry was that the lecturers gave up-to-the-minute, flamboyantly presented accounts of progress on the latest research. Thus, he dramatically demonstrated these new elements and their explosive reactivities to packed audiences (safety officers having not yet been invented) within weeks of seeing them for the first time himself.* The intense strain of all this and of running a high-profile laboratory on a shoestring budget made him ill, but he had captured the public’s imagination so greatly that twice-daily bulletins on his health were published by the Institute. This treatment would not be so remarkable for a modern

* Davy first publicly announced the discovery of sodium and potassium in a Bakerian Lecture at the Royal Society (then, as now, the Society’s most prestigious lecture) in November 1807, 6 weeks after he made the discoveries.


show-business superstar, but Davy was a working scientist! He was also the first to recognize that electrolysis—the interaction of electricity with chemical solutions—could be used to decompose chemicals into their constituent elements. In 1807, he was awarded the Napoleon Prize from the Institut de France for this work.* From today’s perspective, it is astonishing that Davy’s award came when Britain and France were locked together in a long and bitter war and when all normal communications had been suspended. Nevertheless, science two centuries ago was so universally popular and prestigious that the Emperor Napoleon Bonaparte personally gave Davy safe conduct to travel with Faraday to Paris in 1813 to receive the award, and incidentally, to visit Davy’s old patron Count Rumford, who was still living in Paris. How fashions can quickly change! Imagine, if you can, that over a century later, say, German scientists had recommended in 1942 that the “Adolf Hitler” Prize should be awarded to James Chadwick for his seminal work on the discovery of the neutron, that Hitler had guaranteed Chadwick’s safe passage to Berlin to receive it, and that Chadwick had agreed to accept! Davy was knighted in 1812, was created a baronet in 1818, and went on to become the President of the Royal Society. It has often been remarked, however, that the greatest achievement of his glittering career was the discovery of Michael Faraday, whom he first met in 1812. It was one of those close-run things on which the course of history often turns. Shortly after Faraday heard Davy’s lectures on chemistry, Davy was temporarily blinded by an explosion (he was trying to combine chlorine with ammonia), and it was suggested that Faraday should be his amanuensis until he recovered. Faraday made a good impression; but, as ever, the Institute’s finances were tight; and there was no permanent vacancy for him. In 1812, opportunities for experimental scientists were thin on the ground. There were no industrial or government laboratories at that time. England had only two universities—Oxford and Cambridge—and neither had laboratories. There were several universities in Scotland—Aberdeen, Edinburgh, Glasgow, St. Andrews, and Strathclyde—but Faraday did not have qualifications that any of them would have recognized. The Royal Institution was Faraday’s only hope, but Davy could offer him only the Institute’s work on bookbinding. Faraday was about to move off into obscurity when one of Davy’s laboratory assistants was dismissed for brawling. Davy immediately offered Faraday the job. Benjamin Thompson may have been motivated by prospects of personal gain, but Faraday was driven, to use Einstein’s elegant phrase, by a hunger of the soul. He was a Sandemanian (an offshoot of the Church of Scotland) and
* Napoleon took a close interest in matters scientific and, of course, had given Count Rumford an enthusiastic reception some years earlier. Napoleon had previously endowed this prize “for the best experiment that should be made in the course of each year on the galvanic fluid.” Was this a forerunner of the Nobel Prize?


deeply religious, and for him the pleasures drawn from contemplating the wonders of Nature were always in themselves sufficient reward for his labors. At the height of his considerable fame, his salary was “a hundred pounds per annum, house, coal, and candles,” and he refused to accept the many offers that would have earned him truly vast sums of money, a total estimated by his assistant John Tyndall at some £150,000 in his day, or perhaps some $5 million today.* He also declined high civil honors and the most prestigious post in British science—the Presidency of the Royal Society. He thought that these offices might corrupt him! He was, however, the first to understand the intimate relationship between the apparently disparate phenomena of electricity and magnetism and to create the concept of the electromagnetic field. Faraday has been called the greatest experimental scientist who ever lived. Quality can never be rigorously quantified, of course, and such “gee-whiz” accolades are often unhelpful. In his case, however, this accolade seems justly deserved. He made a wide range of very important discoveries; and, of these, electromagnetic induction was by far his most remarkable. Hans Christian Oersted (1777–1851), the Danish physicist, had already shown in 1820 that a flowing electric current produces a magnetic field around the wire carrying it. It was Faraday, however, who saw the full implications of this observation. In 1831, he reasoned that if a magnetic field were induced or created around a current-carrying wire, a real magnet would react with that induced field by moving within it; and in that process, some of the electrical energy in the wire would be converted into the mechanical energy of the moving magnet. Recalling his old mentor Davy’s passion for angling, he remarked after doing the experiment that he was not sure at first whether he “had pulled out a fish or a weed.” However, it did not take him long to recognize the significance of what he had done. According to one of the most famous anecdotes in science, the British parliamentary statesman William Gladstone visited the Institute as Chancellor of the Exchequer and was no doubt mystified by Faraday’s great enthusiasm for a discovery that seemed to have no obvious value. “After all,” Gladstone said, “what good is it?”† Faraday, whose total integrity would not have allowed him to apply even the lightest varnish to the truth, replied, “Why Sir, there is every probability that you will be presently able to tax it.” Once Faraday had recognized the symmetries involved, namely that a moving electric current produces a magnetic field, and conversely that a
* See the Introduction written by John Tyndall in Michael Faraday’s Experimental Researches in Electricity, J. M. Dent & Sons.
† The story may be apocryphal as the Royal Institute has no record of the Chancellor making an official visit. Neither do Gladstone’s extensive memoirs mention it. However, John Tyndall knew Gladstone well and he lived only a short walk away in The Albany, a fashionable set of apartments on the edge of Mayfair. The incident was related by W. E. H. Lecky in his book Democracy and Liberty published in 1899. Lecky said that “an intimate friend” of Faraday had told him about it. Faraday was an intensely private man and had few friends, but Tyndall was certainly one of them.


moving magnetic field produces an electric current, it would have been obvious to him that mechanical energy could also be converted to electrical energy. The unity of Nature at the heart of his religious beliefs was for him, therefore, reflected in the complementarity between the dynamo and the electric motor. He did not stop there. He was always suspicious of the idea generally accepted at the time that an electric current flowed like water, and he began to develop the concept of electric waves that could propagate through solid materials. His idea was that waves could move relatively easily in electrical conductors such as metals but would be more constrained in insulators such as glass. These ideas were encapsulated in his law of electromagnetic induction, which continues to be used in the analysis of the most advanced integrated circuitry to the present day. His discoveries not only electrified the world, but also inspired James Clerk Maxwell, the Scottish theoretical physicist, to produce his electromagnetic field theory, which in turn paved the way for Max Planck, Albert Einstein, and others to lay the foundations of modern physics.

The vagaries of accident and coincidence are not confined to the sciences, of course. But of all humanity’s endeavors, science is unique in that its points of reference must eventually have absolute foundations, by which I mean foundations that are completely without any human control or influence. A scientist’s purpose is to increase our understanding of the universe. As, according to current understanding, the universe would seem to be infinitely complex, we should not expect that our comprehension of any aspect of it can ever be complete, and we should regard every new discovery as merely the best that can be done for the time being. Indeed, every scientist has a duty to review constantly and to challenge current knowledge as they know it; thereby, they may discover profitable new lines of inquiry. Complications also arise because we are, of course, immersed in that universe; but any new scientific facts or laws we might uncover must be, within their declared limitations, as amenable to application here on Earth as on a planet near Alpha Centauri or any other remote corner of the galaxy, whether or not it contains planets. There is no such thing as parochial science. Furthermore, if researchers do not take every possible step to eliminate all conceivable subjective elements or indeed any artificial constraints from their work, then it cannot accurately be called science. Funding agencies today regularly compromise in these respects, and working scientists have little choice but to acquiesce. All scientific research is, of course, subject to human controls and influences. Their effects can be substantial. For many years, organized religion subjected scientists, or would-be scientists, to serious restrictions. Indeed, as is well known, the Italian Giordano Bruno was burned at the stake by the Inquisition in 1600 for persisting in his teachings that the planet Earth might not be unique and that there may be many other Earth-like worlds out there indistinguishable from our own. Among his many perceived offenses was the holding of heretical views contrary to those of the Roman Catholic Church. Much later, Charles Darwin’s evolutionary theories were also initially perceived to be outside the


Church’s faith; even today they are regarded with intense suspicion, to say the least, in some quarters. More generally, such questions as whether researchers should be required to serve national interests, and whether the public should have a role in research selection, are becoming increasingly important.

Almost every scientist needs funding, and their demands for funds today are much greater than the supply, as indeed they are for practitioners in all fields of human endeavor. Not surprisingly, their most important patrons—politicians—insist on obtaining the best value for taxpayers’ money and seek maximum national benefit from whatever funding they authorize. In general, many funding-agency executives are either political appointees or are strongly restricted in what they may do by charters or terms of reference drawn up by politicians. Almost always, these executives exercise their judgments on which research should be funded in consultation with acknowledged experts in the field—the researchers’ peers—an approach generally agreed to be the fairest way to distribute scarce resources. And so it is. It not only works well in virtually every field, but it also has a good record in the sciences for almost all types of research, especially in the well-established mainstreams. Unfortunately, however, it contains a small but very serious flaw, as pragmatism often does. Such faults may be tolerable for most human pursuits. Colleagues might not generally accept an individual historian’s views on the past, say; but once considerations of logic, reason, and style and the accuracy of agreed facts have been taken into account, one person’s historical perspective is as good as another’s. There can be no final arbiter. That is not the case in science. The usual ways of getting consensus on what is scientifically important or unimportant cannot, of course, include direct inputs from Nature. They must be inferred, and that is usually done through consultation. That is unfortunate because in science it is sometimes the case, and especially outside the well-trodden paths of the mainstreams, that the opinions of a thousand confident colleagues can be wrong when one doubtful scientist is right, though it may take time before everyone agrees. Nature is always the final arbiter. If one does not have Nature’s approval expressed repeatedly through the results of experiment, one has nothing. I do not know of a funding agency anywhere today that takes full cognizance of these simple facts, omissions that would seem to place them among the twentieth century’s most important unsolved problems. Worse still, they are almost invariably ignored, and one often hears such remarks as, “Selection by one’s peers is like democracy: it is not the best but it is the best we know.” But as every scientist knows, science is not democratic. Richard Feynman, one of the twentieth century’s most creative, larger-than-life scientists, famously expressed it very well in 1966 when addressing a gathering of teachers in New York, “Science is the belief in the ignorance of experts.” Unfortunately, funding agencies today seem to believe that scientific experts are infallible. Funding agencies might also recall another famous quotation, “It ain’t what you know that gets you into trouble but what you know that


ain’t so.” It is often attributed to either Mark Twain or Will Rogers, but there seems to be no evidence that either of these sages said it. However, I doubt if there is a working scientist who is not aware of the pitfalls its author, whoever he is, so pithily warns against. What a pity that those who make the transition to the corridors of scientific power quickly seem to forget it. In the following chapters, I hope to explain why such lapses are causing serious problems. They set out to describe the discoveries that we can now see affect the lives of everyone, although at the time that was unsuspected. They were all expressions of what might be done if everyone were completely free at every stage of their journeys into the unknown. This not only involved the details of the final discoveries, but also every intermediate stage along that road, too. For example, Niels Bohr preferred not to do a literature search before embarking on an experiment, but to prepare his mind by discussing elements of nuclear physics with people thinking about similar problems. Such an approach might not work today, as it could raise conflicts of interest.

2 Science, Technology, and Economic Growth: Can Their Magical Relationships Be Controlled?

It may seem surprising, but in many respects our world is mainly unchanging. To paint a simple panoramic picture, we now know that the Earth and every tangible thing in it consist of atoms and molecules weakly or strongly coupled together that are virtually the same today as they were when our first human ancestors took to the stage a few million years ago. Effectively, therefore, atoms and molecules are immortal, and we are all the products of recycling on the grandest scale. Thus, the elemental ingredients of our bodies and everything we can see, use, or consume are exactly the same today as they were in the fourteenth century, say, as they are for every animal and every living creature; only the details of their arrangement have changed. The Earth as a physical system has been far from unchanging, of course. Continents move and collide, climate is subject to violent excursions as is the protection provided by its magnetic shield against extraterrestrial bombardment, and each species of the vast spectrum of life on Earth must be able to adjust to the effects of these and other upheavals or it becomes extinct. However, the processes of evolution create species designed to take advantage of specific environmental niches (availability of food, predator-prey relationships, etc.). The behavior of each species is adjusted for survival in that niche, and it is therefore largely unchanging as long as that niche continues to exist. Indeed, the extensive evidence is that animals today live and die much as they have done for millennia.


Our species—Homo sapiens—is different in that the niche we have exploited to our considerable advantage is primarily intellectual in origin.* It may even be misleading to use the word “niche” in this context as it usually describes a well-defined region; in our case, being intellectual, it is probably boundless. In addition, we are currently its only occupant, and the freedom it offers has led, in effect, to our refusal to accept Nature’s provisions in either the quantity or the quality she provides. Our agricultural prowess, among many others, transformed our prospects, and our exceptional creativity naturally led to dominance of the animal kingdom; indeed, we have evolved into people and do not think of ourselves as animals at all. Our intellectual abilities have also continued to mature, and by the beginnings of recorded history a few millennia ago they had led to wide ranges of social and technological achievements that transformed our ways of life from those of our earliest ancestors.† Even if our civilization were reduced to rubble by some cataclysm that blasted us back to the proverbial Stone Age, there would still be an almost infinite gulf between the ways in which the human survivors might live and the ways of the animals.

Yet despite all this, human life remained a constant struggle. Long-term population increases, which are usually good indicators of well-being, barely overcame the ravages of famine and disease for centuries.

TABLE 2.  Annual Average Compound Population Growth Rates for the Past 2000 Years

Years                                  0–1000    1000–1820    1820–1998
Western Europe, %                        0.00        0.20         0.60
Western offshoots, %                     0.05        0.21         1.91
Eastern Europe and former USSR, %        0.05        0.23         0.85
World, %                                 0.02        0.17         0.98

Source:  Maddison (2006). Maddison defines western offshoots as the United States, Canada, Australia, and New Zealand.
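To get a feel for what such compound rates mean over a single human lifetime, a back-of-the-envelope calculation may help (the arithmetic is mine, for illustration only; it is not taken from Maddison’s tables). A population growing at a constant annual rate r is multiplied over t years by

P(t)/P(0) = (1 + r)^t

so at the world rate of 0.02% per year a 70-year lifespan multiplies the population by (1.0002)^70 ≈ 1.01, an increase of barely 1%, whereas at the post-1820 world rate of 0.98% per year the same lifespan multiplies it by (1.0098)^70 ≈ 1.98, very nearly a doubling.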

* In Braben (2004), I discuss humanity’s occupation of the intellectual domain and the roles played by creativity and dissent in our remarkable transition. † The American Lewis Morgan (1818–1881) trained as a lawyer, following which in the 1840s he went to live among the Iroquois Indians and adopted their lifestyle. Later he became interested in anthropology and wrote his famous book, Ancient Society, in which he says (Morgan 1877, p. 34–35): Humanity’s . . . achievements as a barbarian . . . transcend, in relative importance, all his subsequent works. Starting as humanity did of course at the bottom of the scale, those achievements up to the time of the iron age included poetry; mythology; temple architecture; knowledge of cereals; cities compassed by walls of stone, with battlements towers and gates; use of marble in architecture; ship building with planks and probably with nails; the wagon and the chariot; metallic-plate armor; iron sword; wine; potters’ wheel; mill for grinding grain; woven fabrics; axe and spade; hammer, anvil and bellows; . . . plus a large variety of legal and social achievements, including marriage and the family.


Table 2, taken from Angus Maddison’s magnificent twentieth-century work, gives estimated populations for three major blocs and for the world over the past 2 millennia (Maddison 2006). They show that for much of that time populations barely changed over a human lifespan, a fact that tends to confirm that humanity’s lot was generally unhappy for very long periods.

It is not easy to understand why this should be so as our species seems never to have lacked individual creative brilliance to spur our progression. The superb cave paintings from Altamira, for example, which would grace any modern art gallery, are at least 25,000 years old; Homer wrote his masterpieces around 800 BC; Roman times saw such geniuses as Cicero; Bede wrote timeless works in eighth-century England, followed by Roger Bacon in the thirteenth century, William of Occam in the fourteenth, and Geoffrey Chaucer in the fifteenth, to name only a very few, all followed by the magnificent Renaissance, of course. However, these flashes of genius had only modest impact on the prosperity of the ordinary man and woman. The fourteenth century, for example, was dominated by widespread superstition and willful ignorance. Responses to the mysterious Great Plague or the Black Death are typical of the period. The “fatal corruption of the air,” as it was called at the time, apparently arrived out of nowhere and wiped out some 30% of Europeans in only a few years. No authority and indeed no person had the faintest idea what it really was or what should be done to escape the unremitting terror of what is now known to be a bacterial infection.*

Even without plague and other waves of infectious malevolence with which Nature bathes humanity from time to time, life for ordinary people was punctuated by relentless wars of attrition merely to survive. Thus Thomas Hobbes (1651) could write in the seventeenth century:

. . . and worst of all, continual fear, and danger of violent death; and the life of man, solitary, poor, nasty, brutish, and short.

The indomitable human spirit cannot be crushed for long, of course, but humanity’s generally fearful and uncertain outlook had prevailed for millennia. There seemed to be no escape. That situation slowly began to improve in the sixteenth century. Its origin was again exclusively intellectual as we started to find ways of applying our growing comprehension of the world systematically to produce a common advantage. Francis Bacon (1561–1626) showed that scientific research could be much more than airy-fairy philosophy; it had the potential to create tangible outcomes of real value. The Industrial Revolution, which started in Britain about a century later, endorsed his view. The takeoff to steady and sustained growth started in earnest around 1820–1850.

* The prolific British journalist, Simon Jenkins, once remarked that the Black Death was blamed on everything bar its cause.


TABLE 3.  Rates of Real Terms Growth in Gross Domestic Product per Capita

Years                                  0–1000    1000–1820    1820–1998
Western Europe, %                       −0.01        0.14         1.51
Western offshoots, %                     0.00        0.13         1.75
Eastern Europe and former USSR, %        0.00        0.06         1.06
World, %                                 0.01        0.05         1.21

Source:  Maddison (2006). Maddison defines Western Offshoots as the United States, Canada, Australia and New Zealand.

Table 3 indicates something of those changes in gross domestic product (GDP) over the past 2000 years, again taken from Maddison’s work. In particular, progress over the past two centuries has been so spectacular that it might be natural to think it cannot last. However, ebbs and flows excepting, I will argue that there is no reason to suppose that our progression, bearing in mind its origin, should not continue indefinitely. Life for most people at the beginning of the nineteenth century was still based on unremitting hard labor, but our increasing ability to apply the fruits of research and inquiry to the improvement of material well-being led to the creation of a priceless new paradigm for humanity. Pioneers were either allowed freedom to explore and develop or at least were not positively discouraged from doing so. British arrangements for protecting intellectual property rights led the world, a priceless advantage that encouraged her people to be more resourceful and inventive than others, and eventually led to British technology leading the world. However, Britons had no monopoly on insight. From our modern perspective, we now know that trying to develop technologies in fields where there is little understanding of Nature’s wonderful ways is like running with one’s eyes shut. Vision improves when scientific inquiry reveals clues on the best paths to follow; but inevitably, it was a long time before this option became routinely available. The concept of research as a systematic discipline had to be developed. The sources of technology could then be expanded progressively from an exclusive dependence on the inspired guesswork of isolated individuals to organized searches for solutions to specific problems. Also, within a few decades the industrial revolution had spread rapidly throughout Europe and North America. Its steady growth has continued—with occasional explosive surges driven by the imperatives of war—up to the present day. Humanity would therefore seem to have stumbled upon magical formulae for finding virtually inexhaustible fountains of potential wealth for everyone, though as ever, of course, some people always seem to benefit far more than the average; all we need is the wit to recognize and understand the reasons for our good fortune.

A big problem initially is that a scientific discovery’s tangible significance may be very difficult to recognize. A scientist’s flashes of inspiration may uncover new ways of thinking about an important


problem, but different types of pioneering ingenuity are required to turn them into profitable opportunities and bring them to fruition. If we can create the conditions under which the partnerships between creators and developers can flourish, some developments can be more valuable and durable than, say, Saudi oil, as the discovery of the maser–laser turned out to be (see Chapter 7), and can boost the common good fortune. As we gradually discovered how to manage these complex and changing relationships, we began for the first time in history to emerge from stagnation and suffering, allowing sustained economic growth inexorably to exert its life-enhancing magic. The British industrial revolution initiated processes of social change that eventually spread far beyond its shores and, for the industrialized parts of the world at least, have progressively transformed standards of living. Unfortunately, that progression not only seems to have slowed down recently, but we might also be heading for a period of stagnation. Hopefully, we will come out of it, but hope has never been a good recipe for success.

So why has our wonderful magic apparently lost its mojo? Are we beginning to forget how we came to make such dramatic progress? We can begin to answer these questions only when we fully understand what that magic actually is. As every scientist knows, of course, there is no such thing as magic, although possessing a sense of wonder may well be essential. For most scientists, we either understand a facet of Nature’s domains—global warming, say; a specific human disease; why gravity is such a strange and anomalous force; the possible existence of dark energy and matter—or we do not. If we do understand, naysayers can sooner or later be blasted out of the water, and we can count on Nature, through the medium of carefully designed experimentation, to help us. In the absence of understanding, however, anyone can claim anything and no one has an unqualified right to contradict. Magic and other explanations depending on belief, superstition, prejudice, or whatever might equally be valid. There is no question that actual, tangible, real living conditions for most people have improved and that those improvements would probably have exceeded even the wildest dreams of those who lived a couple of centuries ago. Furthermore, there seems to be no good reason why those improvements should not continue. However, if it is not down to magic, what constitutes economic growth and from where does it come?

For many years, economists assumed that growth stemmed from the triple alliance of labor, capital, and resources and that growth would follow as populations and trade increased and explorers discovered new coal mines, gold fields, oil wells, and other tangible stores of Nature’s bounty, which obvious sources of wealth could be tapped by the commercially minded. Moses Abramovitz (1912–2000) from Stanford University is often cited as one of the first to search systematically for the actual sources of growth over an extended period. He found that between the decade of 1869–1878 and the decade of 1944–1953, net US national product per capita in real terms approximately quadrupled while population more than tripled (Abramovitz 1956). His


calculations seemed to show that over some eight decades, the combined per capita input of labor and capital accounted for only 10% of the growth of net output per capita. Thus, a missing source—“total factor productivity” as it came to be called—accounted for almost all growth. Ironically, economists had expected that any differences between estimates of growth based on contributions from the triple alliance and actual data would always be small, and accordingly had named it “the residual,” which perhaps also indicates that they did not really believe that any differences could be enormous; if they were, they might have said, we would already have noticed them.

At roughly the same time, the young Robert Solow, born in 1924, also turned his attention to growth shortly after leaving wartime service in the U.S. Army, taking a PhD at Harvard as a student of Wassily Leontief and accepting an assistant professorship at the Massachusetts Institute of Technology in 1950. At that time, the growth problem was not regarded as urgent. For most experts, the triple alliance provided sufficient explanation, but Solow thought they were wrong. One of his many concerns was that current thinking could not explain why the global economy had not been shaken to bits by the catastrophes of the previous 20 years or so—the world’s worst depression followed by its most terrible war. Taking full advantage of the freedom his new post offered, he examined the US economy between 1909 and 1949 and tried to reconcile data on the observed growth with increases in capital and other resources. To his surprise, he was not able to do so. Introducing what he called a “new wrinkle,” he concluded that the major source of growth—actually seven-eighths of it, no less—came from technical change (Solow 1957). Thus, the triple alliance of labor, capital, and resources, which for decades and indeed for as long as economics had been studied, had been assumed to be humanity’s sole and unquestioned benefactors, actually played only very minor roles. Solow won the Nobel Prize for Economics* in 1987 for his astonishing discovery. It has since been confirmed by many workers, most notably by Edward Denison† and by Angus Maddison’s studies sustained over many years (Maddison 1998) as Solow pointed out in his Nobel Prize Lecture. No one knew anything about this in 1820, one of the benchmark years selected by Maddison.
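The arithmetic behind such estimates is simple enough to sketch. What follows is a minimal illustration, in Python, of the standard growth-accounting decomposition on which calculations of this kind rest; the Cobb–Douglas production function, the 30% capital share, and the growth rates are assumptions made here purely for illustration and are not Solow’s, Denison’s, or Maddison’s figures.

# Illustrative growth accounting in the spirit of Solow (1957).
# All numbers below are invented for the purpose of illustration.

def solow_residual(g_output, g_capital, g_labor, capital_share=0.3):
    """Technical change implied by a Cobb-Douglas production function
    Y = A * K**a * L**(1 - a):  g_A = g_Y - a * g_K - (1 - a) * g_L."""
    return g_output - capital_share * g_capital - (1 - capital_share) * g_labor

# Suppose output grows by 2.9% a year while capital grows by 2.0% and labor by 1.0%.
g_A = solow_residual(0.029, 0.020, 0.010)
print(f"residual (technical change): {g_A:.1%} per year, "
      f"about {g_A / 0.029:.0%} of measured growth")

Whatever extra capital and labor cannot account for lands in the residual; on the illustrative numbers above it would be a little over half of the growth, and on the historical data Solow examined the proportion was far larger.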

* Strictly speaking, the Prize in Economic Sciences is not a Nobel Prize. In 1968, Sweden’s central bank instituted “The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel,” and it has since been awarded by the Royal Swedish Academy of Sciences according to the same principles as for the Nobel Prizes that have been awarded since 1901. The Prize was first awarded in 1969. † Denison, 1985. Denison’s study of American growth between 1929 and 1982 concluded that education accounted for 30% of the increase in output per worker and the advance of knowledge accounted for 64%.


However, led first by ingenuity and technology, national economies slowly moved from the relative torpor of centuries to steady decade-by-decade growth. Science made technology more reliable and also spectacularly extended the range of its domains. Standards of living increased beyond recognition from what they had been. In 1850, for example, an English worker had to labor for more than 2 hours to earn the cost of a single loaf of bread. Today, even for people earning minimum wage, their daily bread can be secured in only 10 minutes or so. Furthermore, despite the current economic crisis, one of the most severe in history, life for most people in the industrialized nations today—the healthy and employed, for example—is generally tolerable. This simple fact must be a nightmare for politicians because the crisis demands action and change. However, the writing is on the wall. Youth unemployment, a terrible combination of words that should be anathema to any civilization because it indicates the presence of corrosion at its very foundations, is at a record high level and rising. Many fear for the future of their jobs and homes, of education, of health services, and of pensions. Uncertainty is everywhere. Most people work hard and can rightly say that they fully deserve what pleasures they can find. However, until relatively recently, working hard was never enough; it merely qualified the victim for more suffering, as sadly it still does in many parts of the world. In 1850s Britain, the working week was around 70–100 hours; conditions were often appalling; there was child labor; and the concept of disposable income existed only for the rich. Recognition of science’s wider importance began to change all that, and not only in Britain, of course.

However, capitalism—the ideology that has overseen all industrial and commercial enterprise for centuries—demands incessant compound growth. Amazingly, for the past century or more and despite crises and wars, science and technology have risen to those demands. But we have a tiger by the tail. If we cannot find a viable alternative to capitalism that offers reasonable living standards for the majority, we must ensure that we have an adequate flow of new sciences to appease the Gods of Growth. If we do nothing, we will soon fall back to the grim conditions with which humanity has had to cope throughout virtually all its existence. The signs are not good. Despite its many flaws, capitalism would seem to be society’s only proven ideological option at present. As for science, its governance is now dominated by the pursuit of nitpicking efficiency improvements and the elimination of risk. We are slowly strangling ourselves with bureaucracy and red tape.

Maddison (1926–2010) was born in Newcastle-on-Tyne, England, was Emeritus Professor at the University of Groningen in the Netherlands, and was Honorary Fellow of Selwyn College, Cambridge. He was educated at Cambridge, McGill, and Johns Hopkins universities before teaching at the University of St. Andrews in Scotland. He had a long association with the Organization for Economic Co-operation and Development (OECD), the publisher of many of his books, beginning before the organization was formally established in 1961. Maddison identifies the following four main causal influences on growth (Maddison 1998, p. 33):

• Technological progress.
• Accumulation of physical capital.
• Improvement of human skills, education, organizing ability.
• Closer integration of individual national economies through trade in goods and services, investment, intellectual, and entrepreneurial interaction.

He also mentions that the literature includes three other elements: economies of scale, structural change, and the relative scarcity or abundance of natural resources. All these potential contributions interact with each other, making it difficult to identify the role played by each one. Notwithstanding all this, he concludes that technical progress has been the most fundamental element of change since 1820. However, he makes a careful distinction between leader and follower countries in analyzing the technical progress stakes (see Chapter 12). During the period 1820–1913, the UK was clearly the leader; and although other European countries were closer followers than most of the world, UK leadership was substantial. As Maddison points out, Japan was the archetypal follower country and indeed in 1867 radically changed its political and institutional infrastructures in order to promote technical transfer and allow adaptation and “catch-up.” From 1913 to 1973, the US made faster progress than the UK ever achieved, which is a major reason that the world economy grew faster in the twentieth century than in the nineteenth. Between 1950 and 1992, the margin of US productivity leadership was substantially eroded as Europe drew closer to achieving US levels of productivity. However, Maddison points to a global slowdown in growth over the last 20 years of his study, that is, from about 1972 to 1992. Maddison mentions a most gloomy possible explanation for the slowdown— that we have reached the point where the easier inventions have been exploited, there is less left to discover, and the unknown has become harder to penetrate. While expressing his skepticism of these propositions, he concludes that they cannot be ignored. That must be true, of course, but the idea that all of the easy discoveries have been made is as old as the hills; it has always been proved wrong; and, indeed, it is ridiculous. It is possible that those who invented the wheel might have thought that nothing so important could ever be done again. The arrival of the bow and arrow, and the discovery of how to make steel might have induced similar musings—and so on, ad infinitum. Most, if not all, inventions or discoveries can be made to look easy after the event, but few have been other than painfully conceived. Important discoveries require substantial levels of dissent at some stage, that is, a refusal to accept things as they actually are or are thought to be, thereby creating points of departure from what had gone before. Artists or musicians have similar problems. For instance, after hearing a Beethoven symphony, one may think that such perfection can never be matched; but musicians continue to compose. Who would dream of saying that all the great music has already been written?


In 1776, Adam Smith published his Inquiry into the Nature and Causes of the Wealth of Nations, which was, in effect, the first text ever written on economic growth. He argued passionately for free trade. This was a very heretical view at the time as the conventional wisdom strongly held that the total volume of trade was fixed by the supply of gold and silver and in modern parlance was, in effect, a zero-sum game. In what became one of the most quoted passages in economics, he wrote (in Book 4, Chapter 2): Every individual . . . neither intends to promote the public interest, nor knows how much he is promoting it. By preferring the support of domestic to that of foreign industry, he intends only his own security; and by directing that industry in such a manner as its produce may be of the greatest value, he intends only his own gain, and he is in this, as in many other cases, led by an invisible hand to promote an end which was no part of his intention.

As I argued in my Scientific Freedom, Smith’s invisible hand proposes a powerful feedback mechanism for promoting growth and prosperity. Its functional details may not be fully understood, but his reference to “invisibility” implies that understanding might be unnecessary; we should merely sit back and allow it to work. That begs the question, of course, but my interpretation of Smith’s meaning is that individuals should be free to form their own judgments on what is important and to do whatever they believe is necessary to bring their own ideas to fruition. In another of his books, The Theory of Moral Sentiments, he writes about altruism and a person’s derivation of pleasure from another’s happiness though, as he puts it, “He himself derives nothing from it”—which is perhaps an inverted form of Schadenfreude. But if a person can see that happiness is stemming from his actions, whether or not he personally benefits, that person would have the pleasure of achievement. That in itself may be sufficient reward. Following Robert Solow’s transformative discovery, we now know that technical change is the dominant source of long-term economic growth; but as Simon Kuznets (the winner of the 1971 Nobel Prize in Economics) has pointed out, growth also brings unexpected results, positive as well as negative. The industrialization of Britain, for instance, led to mass transfers of people from the countryside to overcrowded, unsanitary, and therefore disease-prone cities. Growth made it possible for these problems to be alleviated eventually, of course, but such consequences cannot always be anticipated or be wholly ruled out. Nevertheless, a world without growth would probably mean reversion to pre-seventeenth-century conditions and endless cycles of deprivation and despair for the majority. Warts and all, fostering growth would seem to offer a better alternative than that. But how can we achieve it? The global expenditure on the search for new and improved technologies now exceeds $1 trillion annually and so our commitment can hardly be faulted. However, as I pointed out in the Introduction, the search is now far from being unbiased and


open-minded. Increasingly, our institutions have preconceived ideas about what they are looking for. The future has suddenly become predictable. The pursuit of efficiency has always been a priority, but only in the past few decades, following revolutionary developments in computing and communications, has it been possible to implement that pursuit rigorously and relentlessly. Institutions now revel in the powers of their new toys. We now live in an age in which efficiency—that is, perceptions of efficiency—is paramount, particularly in resource allocation and use. But efficiency’s relationship with creativity is not understood. The length of Maddison’s list and his hints on their interrelationships are indicators of the scale of growth’s complexity. Solow and others were surprised to discover that “the residual” is the dominant source of growth and that the once-supposedly-supreme triple alliance of labor, capital, and resources plays only a relatively minor role. It might be concluded, therefore, that current understanding is wrong or at least seriously flawed. What should be done, therefore? My “new wrinkle” would be to suggest a return to first principles. We know that humanity has actually succeeded in lifting itself out of what for millennia must have seemed to be a permanent condition of stagnation and suffering. There was no central authority in any country in 1820, say, to direct and coordinate that transformation, to create the sustained economic growth that lasted almost 2 centuries, or to stimulate the avalanches of scientific and technological discovery on which all this was based. We allowed a thousand blooms to blossom if they could, and many did. I suggest, therefore, that our first step in restoring creativity to its rightful and proven place is to recognize that attempts made to manage this priceless gift usually result in its curtailment. It is often necessary to direct and coordinate efforts toward the achievement of specific goals. Industry must do so to maintain competitiveness, as must governments to meet their social and other obligations; and high levels of creativity are required for success. It will always be necessary to control such enterprises, and current techniques for managing them are probably adequate. But as I will explain in the following chapters, few organizations know how to manage unbounded creativity. Many of the apparently intractable problems facing us today will probably stay that way unless we can find ways of applying the full unmitigated powers of the human intellect against them, and that will invariably mean giving those intellects complete freedom. Under today’s rules, that is a forlorn hope. The next step should be to recognize that the most recent falls in growth that Maddison discussed coincide with the introduction over the past few decades of the new policies for managing academic research, policies that are virtually the same everywhere. As academic research was the source of almost all the 500 or so twentieth-century Planck-Club discoveries, it should be astonishing that this correlation has not already been noted.
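The “residual” mentioned above has a precise meaning in growth accounting, which can be stated compactly (a standard textbook formulation, added here for illustration; it is not taken from Maddison’s or Solow’s own text). With output Y, capital K, labor L, and capital’s share of income α, the growth rate of output decomposes as

\[
\frac{\Delta Y}{Y} \;=\; \alpha\,\frac{\Delta K}{K} \;+\; (1-\alpha)\,\frac{\Delta L}{L} \;+\; \frac{\Delta A}{A},
\]

where the last term, the Solow residual ΔA/A, is whatever growth remains after the measured accumulation of capital and labor has been accounted for. It is conventionally identified with technical change, and in Solow’s original study of the United States it accounted for the great majority of the growth in output per worker.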


Against this background, in the following chapters I will outline the research that led to a dozen or so of the century’s most radical and transformational scientific discoveries. Discovery is a very delicate plant. I will also try to describe the circumstances in which these discoveries were made; and I will try to show that, if today’s ubiquitous rules and regulations had been applied to that research, we would probably not now be enjoying the huge intellectual and economic benefits it created.

3 Max Planck: A Reluctant Revolutionary with a Hunger of the Soul

Albert Einstein, in his Preface to Max Planck’s book (1933), Where Is Science Going?, says: Many kinds of men devote themselves to science, and not all for the sake of science herself. There are some who come into her temple because it offers them the opportunity to display their particular talents. To this class of men science is a kind of sport in the practice of which they exult, just as an athlete exults in the exercise of his muscular prowess. There is another class of men who come into the temple to make an offering of their brain pulp in the hope of securing a profitable return. These men are scientists only by the chance of some circumstance which offered itself when making a choice of career. If the attending circumstance had been different, they might have become politicians or captains of business. Should an angel of God descend and drive from the temple of science all those who belong to the categories I have mentioned, I fear the temple would be nearly emptied. But a few worshippers would still remain—some from former times and some from ours. To these latter belongs our Planck. And that is why we love him. . . . The state of mind which furnishes the driving power here resembles that of the devotee or the lover. The long-sustained effort is not inspired by any set plan or purpose. Its inspiration arises from a hunger of the soul. . . . [Planck’s] work has given one of the most powerful of all impulses to the progress of science. His ideas will be effective as long as physical science lasts. And I hope that the example which his personal life affords will not be less effective with later generations of scientists.


This deeply respectful tribute from one of the world’s greatest scientists emphasizes the monumental importance of Max Planck’s seminal work. Einstein also expresses admiration for Planck’s unwavering dedication to exploring lines of inquiry that we now know often came close to overcoming him with waves of self-doubt. To make matters worse, his colleagues did not even agree that the problem he was tackling was important! But the need to satisfy his “hunger of the soul” prevailed, and we should all be grateful. One shudders to think what would have happened had he been exposed to the full rigors of having to justify his crusade according to today’s rules when he was setting out, as would be required from a modern Planck. Max Planck’s early background could hardly have been more propitious for a distinguished academic career. He was born in 1858 in Kiel (then under Danish rule) to professional parents of German ancestry; his father, Johann Julius Wilhelm Planck, was professor of constitutional law at the university there. In 1867, his father accepted a prestigious Chair at the University of Munich, the capital of Bavaria and a city noted for its culture. The family was very influential; Max’s uncle Gottlieb Planck was one of the drafters of the German Civil Code—the body of law that eventually (in 1900) unified the local State laws that had applied before Germany was created as a country in 1871. Not surprisingly, therefore, the young Planck received an excellent education in languages, mathematics, history, and music; and although he did well, he did not excel in any of them. However, perhaps inspired by his mathematics teacher, Hermann Müller, who had told him about the newly discovered law of energy conservation, he chose physics. That choice had to survive its first test when he presented himself at the University of Munich in 1874, as Philipp von Jolly, the professor of physics there, advised him that prospects in physics were meager and that all the major discoveries had already been made, so he would be wise to choose something else. Coming only a decade or so after James Clerk Maxwell (1831–1879) had published his seminal papers unifying the apparently unrelated fields of electricity and magnetism, the advice was understandable. Maxwell’s work was a tour de force partly inspired by Michael Faraday, who years before had wondered whether the magnetic force was instantaneously transmitted or not. Maxwell’s equations had predicted the existence of electromagnetic waves; and in one of the greatest “Eureka!” moments in science, he was astonished to discover, when he plugged in the values of the various constants involved, that the equations also predicted that those waves should travel with exactly the same velocity as light. Thus, he had shown that light itself is an electromagnetic wave. Perhaps aware of the heroic creativity involved in such work, Planck told von Jolly that he had no wish to make major discoveries. He wanted to understand and deepen knowledge of the known fundamentals of the field; indeed, his search for “the absolute and the invariant,” as he called them, was to determine the rest of his life. Graduating in 1877, he spent a year in Berlin working with Hermann von Helmholtz and Gustav Kirchhoff, the leading physicists at the time, and went


on to write his doctoral thesis in thermodynamics, a most unfashionable choice of field. As he relates in his autobiography (Planck 1950), his thesis was ignored by his colleagues. In particular, he thought Helmholtz had not read it; and Kirchhoff had expressed only disapproval. The university had accepted it, he thought, only because his superiors had been impressed by his practical work in the laboratory. But Planck was a theorist and never worked in a lab again. It is virtually inconceivable that a graduate student would escape such indifference in today’s highly competitive world. Planck was much luckier and his career did have a next step, which was to prepare his habilitation thesis—a work intended to show a candidate’s originality and grasp of the subject, success in which would allow entry to the ranks of academia. Working on thermodynamics, and particularly on the second law, he leapt this crucial hurdle just over a year later when he was only 22, a remarkably young age to make this important transition. Unfortunately, these ranks were also unpaid. He had no choice, therefore, but to continue to live at home with his parents, and, as he observed, “to be a financial drain on them”—not for long, however; and thanks to strong backing from Helmholtz—then widely dubbed the “Reich Chancellor of Science”—on whom Planck’s originality had clearly made a strong impression, he was appointed professor of theoretical physics at the University of Berlin in 1888 at the still remarkably young age of 30. He stayed there until he retired almost half a century later. Thermodynamics is the rather misleading name given to the study of the ways Nature deals with energy in its many forms—how it is propagated, transformed, and degraded in solids, liquids, and gases, for example. The so-called first law asserts the conservation of energy, a discovery that was probably responsible for setting Planck on the road to becoming a physicist. As a young man, he had been impressed by this law’s absolute certainty and how, for

Figure 1 Max Planck with his fiancée, Marie Merck, in 1886. (Reproduced by permission of the Archives of the Max Planck Society, Berlin.)


example, the effort required to lift a brick to the roof of a building was never forgotten or lost but was stored “undiminished and latent” independently of all human agency, until one day the falling brick might deliver that effort in concentrated form to the head of an unfortunate passerby. Heat is, of course, a form of energy, and so it might be thought that the passage of heat from a hotter to a cooler body might be analogous to a weight dropping under gravity. There are major differences, however; and Planck devoted his PhD thesis, and indeed the rest of his life, to explaining and elaborating on their subtle differences. The second law of thermodynamics, first expressed by the French military engineer Sadi Carnot (1796–1832) in 1824 when he was only 28, stated that heat cannot flow from a cold to a hot body or, expressed in more modern language, that the entropy of a system always increases with the passage of time or stays the same. Carnot had seen military service in France’s long war against Great Britain and was understandably embittered by a defeat that he attributed largely to France’s technical inferiority. The Industrial Revolution began in England, of course, and the steam engine was one of its greatest triumphs. Exports to France, banned during the war, resumed soon after. Carnot wondered why France could not make its own, particularly as so many British developments had come from uneducated people. George Stephenson, most notably, could not read until he was 18, but against all the odds he went on, some 14 years later, to design and build one of the world’s first steam locomotives. He called it “Blücher,” after the Prussian general who played a decisive role in the Battle of Waterloo, a choice that no doubt further annoyed the chauvinistic Carnot. In sharp contrast, Carnot had a privileged background as the eldest son of a senior French Revolutionary government minister and was a graduate of the prestigious Paris École Polytechnique. Carnot wondered why some steam engines with specific designs were successful and others less so. What were the reasons for these differences? Steam engines—that is, static steam engines—had been developed extensively by 1815; however, no one had studied the sciences on which they were based. Progress had come entirely from inspired trial and error, as has usually been the case throughout the history of technology. Carnot directed his attention to the essence of the processes of power generation, ignoring all mechanical and other details. Indeed, he wanted to extend his theoretical studies not only to all steam engines, but also to every heat engine that could be imagined. These are seriously significant steps for an engineer to contemplate, and one wonders what the response of a modern committee might be if a young engineer today sought their permission and funding to stray so far from the beaten tracks of established practical expertise. However, as Carnot saw it, power is produced when the heat “drops” from the higher temperature of the boiler to the lower temperature of the condenser. At that time, heat was presumed to be a fluid that could be neither created nor destroyed; this was an expression of the caloric theory, later shown to be erroneous (by the American, Benjamin


Thompson, whom we met in Chapter 1, among others) and on which Carnot had serious doubts. The law of energy conservation had not yet been formally stated. His theoretical considerations led him to make the prediction that the power generated depended only on the temperature difference between the boiler and the condenser divided by the boiler temperature; the choice of working substance—steam, alcohol, gas, or whatever—was of no importance.* Sadly, Carnot died of cholera only a few years later, and it was left to others—notably William Thomson (1824–1907), later Lord Kelvin, and Rudolf Clausius (1822–1888)—to continue his work. Thus, the so-called “second law” of thermodynamics was discovered before the first, and one of the greatest discoveries in science was made by an engineer! To briefly return to Planck’s preoccupation, the potential energy of a falling rooftop brick is converted virtually instantaneously with almost 100% efficiency (ignoring air resistance) into damage to the unfortunate victim’s head and anything else it may hit. What if we consider heat energy? Can heat energy be converted as efficiently as gravitational energy? Probably not, was the answer that emerged during the nineteenth century; it would critically depend on the entropy of the system, the somewhat mysterious and elusive concept introduced by Clausius in 1865. Unlike many other quantities (mass, energy, wavelength, electric charge, spin, etc.) entropy is difficult to measure specifically, and yet its effects pervade the universe. These questions also introduce the concept of reversibility. If, for example, we look at the collisions of billiard balls on a table, it would not usually be possible to distinguish whether a movie of the collisions was being played forwards or backwards (assuming that the balls do not leave the table, of course, or are not potted). Newton’s laws of motion also apply whichever way time is made to flow. However, a film of Planck’s errant brick hitting a victim’s head, or of a china cup hitting the floor, would portray irreversible processes; a breaking cup, for example, never puts itself together again. The significance of this common sense statement, or of “the bleedin’ obvious,” as the famous British comedian John Cleese might have pithily and somewhat crudely described it, is easy to underestimate. Planck did not do so. Indeed, he was the first scientist to realize the full meaning of the concept of reversibility, and his tireless pursuit of what in most scientists’ eyes were obscure questions of no possible interest or practical benefit led to his discovery of energy quantization, the end of the dominant hegemony of “classical” physics, and the dawn of a new era—not only for science but for every one of us. Carnot’s radical discovery therefore gradually opened a door to reveal that time has a preferred direction and that Nature, therefore, is asymmetrical. Time flows in one direction only; and a movie of any randomly chosen event in the universe, when played backwards, would be artificial and would not necessarily portray something that could be actually observed. In fact, there are very few
* For an excellent modern discussion of Carnot’s 1824 publication, see Erlichson (1999).


reversible processes, and they occur only in special circumstances. The motions of colliding billiard balls are not actually reversible as the balls gradually come to rest, of course; their energy of motion being converted, irreversibly, into heat. Indeed, it turns out that if a system exhibits only reversible processes, then it is said to be in equilibrium. In such a system, no fluctuations of any kind—in temperature, in density, in gaseous composition, or any other property—are possible, and nothing new can happen. If the system does fluctuate, new processes are set in motion, and the system heads for a new equilibrium. Clausius realized that it would be useful to introduce a new quantity—entropy—that could be used to measure a system’s potential for change. For a closed system undergoing irreversible processes he proposed that its entropy always increases*; for reversible processes, entropy is constant. In the case of the falling cup (Planck’s famous brick is too grisly an example for me) its constituent atoms and molecules are unchanged after the fall, but they are irreversibly redistributed over the floor so that the entropy of the system increases. Thus, nonequilibrium systems are the most interesting, and from our point of view at least, life itself is the ultimate example. We can now return to Max Planck and the genesis of his great work. One of the biggest unsolved problems of the time concerned the nature of so called “blackbody” radiation. When an object, such as a poker, is heated, it emits a spectrum of radiation whose wavelengths (their colors: red, white, etc.) depend on its temperature. The spectrum could be studied by making a furnace with a tiny hole in its surface through which a minute fraction of the internal radiation can escape without, in principle, changing the properties of the furnace. At zero temperature, there would be no radiation, of course; that is, it would appear black. When it is brought to equilibrium at another temperature it will emit the blackbody radiation appropriate to that temperature. Earlier, Planck’s former mentor Kirchhoff had discovered his thermal radiation law, which stated, in effect, that the radiation spectrum coming from a blackbody depended solely on its temperature; the materials used to construct the furnace, or the composition or other properties of any emitting or absorbing objects inside it, played no roles whatever. This remarkable statement appealed to Planck, as it seemed to describe a fundamental property of the universe rather than something as mundane as the happenings within a furnace. As Planck explains in his autobiography (Planck 1950, p. 34–35): I had always regarded the search for the absolute as the loftiest goal of all scientific activity.

He therefore eagerly set himself the task of explaining the reasons behind Kirchhoff’s wonderful law.
* Clausius formally defined entropy as the energy available for change in a system divided by its temperature. It can be defined less formally as the quantity of disorder or chaos in a system.
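Two results described above, Carnot’s limit on heat engines and Clausius’s definition of entropy, can be stated compactly in modern textbook notation (a summary added here for reference; it is not how either man originally wrote it). Carnot’s conclusion is that no engine working between a boiler at absolute temperature $T_{\mathrm{hot}}$ and a condenser at $T_{\mathrm{cold}}$ can convert heat into work with an efficiency greater than

\[
\eta_{\max} = \frac{T_{\mathrm{hot}} - T_{\mathrm{cold}}}{T_{\mathrm{hot}}},
\]

whatever the working substance. Clausius’s entropy is defined through the heat $\delta Q_{\mathrm{rev}}$ exchanged reversibly at temperature $T$:

\[
\mathrm{d}S = \frac{\delta Q_{\mathrm{rev}}}{T}, \qquad \Delta S \geq 0 \ \text{for any closed system},
\]

with equality holding only for fully reversible processes.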


The enormous problem was that all earlier attempts to explain blackbody spectra using current theories had not merely failed but failed catastrophically as they predicted that radiation should vary as the square of the frequency (see Figure 2). This would mean that pokers and indeed the homely fireplace would always be extremely dangerous as they should be powerful sources of X-rays! This was clearly nonsense. Planck, and of course all other scientists had they noticed it, would seem therefore to have stumbled across a mountain of ignorance. Something was radically wrong with conventional and “classical” explanations—but what? Planck’s first step was to assume that the cavity of the furnace was filled with oscillators, from where the radiations emitted, absorbed, and propagated by each oscillator would be described by the equations recently discovered by James Clerk Maxwell to explain electromagnetic waves. However, their energy distribution was another matter; and Planck set about explaining that by combining Maxwell’s work with his own on thermodynamics, a departure that led him to consider the oscillators’ entropy rather than their temperature. These seminal steps led him to remark (Planck 1950, p. 38): It was an odd jest of fate that a circumstance which on former occasions I had found unpleasant, namely, the lack of interest of my colleagues in the direction taken by my investigations, now turned to be an outright boon. While a host of outstanding physicists worked on the problem of spectral energy distribution,


Figure 2 These curves illustrate the dramatic divergence between the predictions of “classical” physics and reality. The observed intensity of blackbody radiation at various temperatures (expressed in Kelvin units) is plotted as a function of wavelength in nanometers. The “classical theory” curve predicts a spectrum that diverges ever more sharply from the observations as the wavelength decreases. (Source: Wikipedia.)


both from the experimental and theoretical aspect, every one of them directed his efforts solely toward exhibiting the dependence of radiation on the temperature. On the other hand, I suspected that the fundamental connection lies in the dependence of entropy on energy. As the significance of the concept of entropy had not yet come to be fully appreciated, nobody paid any attention to the method adopted by me, and I could work out my calculation completely at my leisure, with absolute thoroughness, without fear of interference or competition.

This simple statement from one of the greatest scientists who has ever lived provides powerful arguments against current research funding policies. It would have been unthinkable that Planck should at that most crucial stage in his career have been obliged to seek permission from his colleagues before he could proceed further, yet that is precisely what a modern Planck would have to do. One wonders what evidence, if any, today’s policy makers have used to support their radical changes. Whatever its source, it is distributed globally because virtually every agency insists on applying the same mandatory second-guessing filters before public (and many private) funds will be released. Despite Planck’s fame, it seems that his writings and thoughts, as opposed to his discoveries, have been forgotten or that his experience has been deemed irrelevant to today’s world. Planck, of course, was spared all this modern nonsense. He continued to “work out his calculation with absolute thoroughness” and came up with an expression that agreed perfectly with all the data. He published it in October 1900, but was far from satisfied because he could not give his equations physical meaning. His quest led him to study the relationship between entropy and probability, an approach that had been inspired by Ludwig Boltzmann (1844–1906) but with which Planck had initially disagreed. Planck had always been a staunch supporter of “classical” physics. He intensely disliked Boltzmann’s claim that atoms were actual entities, as he preferred to believe that they were merely theoretical devices for helping to understand material behavior. Furthermore, he could not accept Boltzmann’s claim that atomic properties could not be definitely described; one can only refer to probabilities. However, Planck later changed his mind, of course, and magnanimously told him so. Indeed, Planck recommended Boltzmann for the Nobel Prize in 1905 and also in 1906, the year in which Boltzmann tragically died. Planck himself did not win the Nobel Prize until 1918. Boltzmann was a remarkable scientist cursed by remarkable prescience. There was little data to support his clear vision, mainly because the experimentalists who might provide it were not convinced that the data were worth looking for. This lack of experimental confirmation led Boltzmann to suffer agonies of self-doubt, as well as actual and intense opposition from Ernst Mach (1838–1918) and Wilhelm Ostwald (1853–1932), two contemporary and highly influential scientists. Boltzmann had always suffered from depression; and perhaps because his work had not yet received the widespread acceptance it deserved (it would do so shortly) and perhaps feeling that he had wasted his


life, he (for whatever reason) committed suicide by hanging himself in 1906. It is ironic that his famous tombstone (see Figure 3), carries his equation, “S = k log W,” but Boltzmann never wrote it in that succinct form—that was the work of Max Planck; and, if Boltzmann could look down and see it, he might agree that he could have no finer epitaph nor a better friend. The Austrian physicist Dieter Flamm (1936–2002), a grandson of Boltzmann, described the significance of Boltzmann’s farsightedness with the following tribute (Flamm 1997): Ostwald reports that when he and Planck tried to convince Boltzmann of the superiority of purely thermodynamic methods over atomism at the Halle Conference in 1891 Boltzmann suddenly said: “I see no reason why energy shouldn’t also be regarded as divided atomically.” Finally I would like to mention, that Boltzmann in his lectures on Natural Philosophy in 1903 already anticipated the equal treatment of space coordinates and time introduced in the theory of special relativity.

Figure 3 Ludwig Boltzmann’s gravestone in Vienna, Austria. It carries his famous and iconic equation, S = k log W, in the form created by Max Planck, where S represents the entropy of a system, W is the number of microscopic configurations (microstates) through which the system’s macroscopic state can be realized, and k is Boltzmann’s constant.
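A small worked example may help to fix the meaning of W (the illustration is added here; it is not taken from the book or from Boltzmann). Consider N coins lying on a table, of which n show heads. The macrostate “n heads” can be realized in

\[
W = \binom{N}{n} = \frac{N!}{n!\,(N-n)!}
\]

distinct ways, and its entropy is $S = k \ln W$. For $N = 100$, the perfectly ordered macrostate $n = 0$ has $W = 1$ and hence $S = 0$, whereas the disordered macrostate $n = 50$ has $W \approx 1.0 \times 10^{29}$ and the largest possible entropy; a system left to shuffle itself at random overwhelmingly ends up near such high-entropy states, which is the statistical content of the second law.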


A grandson might tend to exaggerate, of course, but Jacob Bronowski (1908– 1974), a widely acclaimed historian of science, wrote in 1973 (Bronowski 1981, p. 220): Boltzmann was an irascible, extraordinary, difficult man, an early follower of Darwin, quarrelsome and delightful, and everything that a human being should be. The ascent of man teetered on a fine intellectual balance at that point [in 1900] because had anti-atomic doctrines then really won the day, our advance would certainly have been set back decades, and perhaps a hundred years.

Boltzmann’s work may not have been fully appreciated and celebrated in his lifetime, but, in 1906, no one could alter the facts that he had brought his ideas to fruition and that they could therefore be left to the consideration of later workers. Bearing in mind that his peers steadfastly fought against accepting the results of his completed works while he was alive, I hope that even the most vehement defender of current research selection policies would agree, therefore, that a modern and youthful Boltzmann planning radically to challenge equally well-cherished ideas today would probably not even be allowed to begin. This is simply because his peers would have powers to veto such challenges without publicly having to declare their hands. However, I am afraid that my hopes are probably wishful thinking. Our proxies today seem determined to make selection policies ever more restrictive in their relentless quests for the best value for money. Are they aware that according to Bronowski at least, a considerable authority, they are promoting the policies that might retard humanity’s advance? Have they not read the very words Planck used in commenting on Boltzmann’s struggles (Planck 1950, p. 33): This experience gave me also an opportunity to learn a fact—a remarkable one, in my opinion: A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.

Returning to Planck’s story: his considerations of the relationship between entropy and probability led him to introduce a new universal constant, the elementary quantum of action, which he denoted by the symbol “h.” It expressed the relationship between the energy of an oscillator, E, and the frequency of the radiation it emits, ν, as E = hν. Thus, Planck’s famous and ubiquitous constant was born, and an announcement was made to the German Physical Society in Berlin on December 14, 1900; but he was still unhappy. Despite his very considerable efforts over the following years, he was unable to incorporate his new constant into the framework of his beloved classical theories. In 1905, Albert Einstein (1879–1955) published his Theory of Relativity in the Annalen der Physik, a journal that by


chance Planck edited. He saw the paper’s enormous importance immediately and bravely published Einstein’s very short and unreferenced paper without the usual peer review. Planck had said many times that he thought a search for the absolute was the noblest task of science, yet here he was vigorously promoting a paper on relativity. There would therefore seem to be a contradiction. But Einstein’s seminal paper was perhaps misnamed; it is indeed concerned with absolutes—the space–time continuum and the velocity of light. In classical theories on the other hand, space–time does not exist, and the velocity of light has only relative significance. Planck summarized all this very powerfully in his Scientific Autobiography and Other Papers (Planck 1950, p. 47): The velocity of light is to the Theory of Relativity as the elementary quantum of action is to the Quantum Theory; it is its absolute core.
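Planck’s step can be summarized compactly in modern textbook notation (a restatement added here for reference; it is not how Planck presented it in 1900). If an oscillator of frequency ν can hold only the energies

\[
E_n = n\,h\nu, \qquad n = 0, 1, 2, \ldots,
\]

then its average energy in thermal equilibrium at temperature T (with k Boltzmann’s constant) is

\[
\langle E \rangle = \frac{h\nu}{e^{\,h\nu/kT} - 1},
\]

which tends to the classical value kT when hν ≪ kT but is strongly suppressed at high frequencies. That suppression is what tames the runaway growth, proportional to ν², that classical theory predicts and that the “classical theory” curve of Figure 2 displays, and it yields a spectrum in agreement with observation.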

My short account is not, of course, intended to cover all the details leading up to Planck’s great discovery. The progress of nineteenth-century science was highly convoluted, as indeed it has always been at any time. Planck did not proceed, step by unfaltering step, to his eventual conclusion. He was sometimes wrong, particularly, for example, in his early disregard for “atomic” theories of matter. On his own admission,* he was slow to “react quickly to intellectual stimulation” and needed time, sometimes considerable time, to adjust to new ways of thinking. Indeed, some of his critics said of him that he had made so many mistakes that eventually he had to find the right answer. But in those days, mistakes that Planck or any other academic might make were matters for them to resolve. Their reputations might suffer in the short term, but their funding, so long as it was modest, would rarely be jeopardized, as would probably be the case today. Einstein was the first to see the full implications of what Planck had done, pointing out that Planck’s oscillators must either have zero energy or an integral multiple of the energy quantum, hν, where ν is the frequency of the emitted radiation. Thus energy had been “atomized” as Boltzmann had predicted it was: “classical” physics, of course, requires that energy streams continuously and may have any conceivable value. Planck could not fully accept Einstein’s conclusion for some years. In 1910, he said (Heilbron 1986, p. 21): The introduction of the quantum of action “h” into the theory should be done as conservatively as possible, i.e., alterations should only be made that have shown themselves to be absolutely necessary.

But eventually he did so, and the highly influential Planck was the first respected theorist to fully and enthusiastically support Einstein’s work despite widespread skepticism, and he courageously played a vital role in gaining its
* For an excellent review of Planck’s life and work, see Heilbron (1986).


general acceptance. It is no wonder that Einstein held Planck in such high affection and esteem! As a scientist and student of Planck’s work, I cannot even imagine that he would have agreed at the beginning of his career to submit a written account of his future plans for his peers’ critical consideration. He might have said that his sole objective for the next 3 years, say, or indeed for the rest of his life, would be to strive to increase understanding; but such an undefined plea would be unlikely to cut any ice with funding agencies today, whatever the standing of its originator. Planck was renowned for his absolute honesty and integrity, but these qualities are not unique to Planck. Nowadays, the funding rules reward credible dissembling (though many avoid it) and would put a modern Planck at a serious disadvantage. While Planck was still in his early twenties, he also had to deal with the indifference, to say the least, of the two ultra-senior members of his university department—Helmholtz and Kirchhoff—to his earliest work without deflecting in the slightest from his chosen path. His self-imposed task was to prove that entropy (then a virtually unknown concept) was, after energy, the most important property of a system. We should always encourage such youthful ambition, of course, but we must also ensure that we provide the freedom to allow its expression. All too often today such ambition would probably be dismissed as arrogance. Today’s world is utterly different from 1900, of course, but potential sources of ignorance in important scientific fields still abound. However, we will clearly identify them only if we constantly subject what we believe we know to sustained and critical examination. Unfortunately, the foundations of current wisdom have, of course, been built on the consensus of the majority, and as Planck would probably agree, the very scientists who have contributed to it are unlikely to agree that it should be radically challenged. I will return to these considerations in Chapter 12.

4 The Golden Age of Physics

Bliss was it in that dawn to be alive,
But to be young was very heaven!
—William Wordsworth, The Prelude, ca. 1800

At the beginning of the twentieth century, many eminent scientists still doubted the existence of atoms, preferring to see them as useful concepts rather than reality. As late as 1882, Max Planck believed that “atomism,” as the concept was then called, might not be conducive to progress in science. Atoms, he thought, do not favor any direction in space or time and, hence, would be irreconcilable with the principle of entropy increase. He commented (Heilbron 1986, p. 14): Despite the great success that the atomic theory has so far enjoyed ultimately it will have to be abandoned in favor of the assumption of continuous matter.

He was shortly to change his mind, of course. Much later, Edward Andrade, a one-time student of Rutherford’s and a professor of physics at University College London, my own home institution, said in 1957 when speaking about the situation during approximately the first decade of the century (Andrade 1958): It is, perhaps, not unfair to say that for the average physicist of the time, speculations about atomic structure were something like speculations about life on


Mars—very interesting for those who liked this kind of thing, but without much hope of support from convincing scientific evidence and without much bearing on scientific thought and development.

However, scientists’ general indifference to problems could not at the turn of the twentieth century, or even in 1957, have resulted in their being banished beyond the pale of projects worthy of being funded. In those days, there were no centralized funding agencies to push policies in one direction or another, whereas today we have them in abundance. Thus, a few scientists of their own volition, and perhaps inspired by Planck, slowly but progressively turned their attention to problems that could barely have been imagined in the nineteenth century to produce in the new century’s first few decades an explosion of knowledge on the fundamental structures of matter that dramatically extended the revolution that Planck had so reluctantly begun. I do not presume to attempt a history of this cultural revolution. That would take a book of monumental proportions and an ability to match.* However, building on my introduction to Planck’s great works in Chapter 3, I will continue to outline some of its beginnings. As before, my purpose will be to assess the likelihood that the modern successors of its leading players, such as Joseph John (“JJ”) Thomson, the “father of the electron”; Ernest Rutherford, the “father of the nucleus”; Niels Bohr, the “father of the atom”; Wolfgang Pauli, the discoverer of the exclusion principle; and Werner Heisenberg, the discoverer of the uncertainty principle, would (if they were setting out today) be free to turn their fresh and fertile minds to some of the twenty-first century’s intractable scientific problems. I have not specifically included Albert Einstein in this list mainly because his story is so well known—see Poster 2 for a short summary of his youth—but his powerful influence is ever present throughout these astonishing times. However, would his modern successors or others of similar intellectual stature have the freedom today to play such roles? One could argue that scientists as gifted as Einstein come along only once in a hundred years or more, so we might never see their like again in our lifetime. But world population today is over 4 times what it was at the close of the nineteenth century; and, in addition, women are now much more likely to be educated to the highest standards, thereby for the first time in history roughly doubling the pool of potential candidates. It might therefore be almost 10 times more likely that we will see another such genius as Einstein, whatever the sex, but of course it may not be in physics. Those in possession of towering intellects are, of course, not easily deflected, and defenders of today’s ubiquitous rules might say that such people will always succeed whatever obstacles fate deploys, so that it is unnecessary to make special provision for them. But we also know from the biographies and
* Two books that go some way to meeting this formidable specification are Abraham Pais’ Niels Bohr’s Times (1993) and Ronald W. Clark’s Einstein: The Life and Times (1982).


Poster 2: Albert Einstein’s Inauspicious Youth

Einstein was born in 1879 in Ulm, a German city on the banks of the Danube, to well-to-do parents, yet his scientific career could not have had a less promising beginning. He was unable to speak fluently until he was 9 years old, his school performance was below average except in mathematics, and his headmaster, when asked to advise on the profession Albert might take up, said, “It doesn’t matter, he’ll never make a success of anything.” Moving on to secondary school—the Luitpold Gymnasium in Munich—he was still thought to be somewhat backward but, even worse, came to be regarded as a disruptive influence. His biographer, Ronald Clark, reports (Clark 1982, p. 35) that Einstein was: A precocious, half-cocksure, almost insolent boy who knew not merely which spanner to throw in the works, but how best to throw it.

No doubt, he was also rebelling against the Gymnasium’s authoritarian teaching methods that demanded, “the obedience of the corpse,” Kadavergehorsamkeit as he described it. Not surprisingly, he did not do well. His school, unwilling to tolerate his contemptuous arrogance, would also seem to have been malicious; it expelled him when he was 15 years old. To make matters worse, Einstein decided soon after that he no longer wanted to be German and renounced his citizenship at age 16. Now stateless, he was also without the qualifications almost all universities demanded for entry at that time. However, Einstein’s supreme confidence was undented, and he had the backing of a very supportive family. In particular, his mother’s relatives were well-off, and they agreed to sponsor the rebellious lad through the university—ETH in Zürich, one of Europe’s finest—but just as important, one of the few universities whose only requirement was that a candidate should pass its entrance examination, which he did in 1896. However, Einstein had not changed his spots. He described his physics professor, Heinrich Weber, as “plodding and conformist,” as his courses made no mention of Maxwell and his revolutionary works. Einstein was therefore obliged to study them at home. One can imagine that his demeanor towards his professor was probably somewhat less than respectful, but Weber had the last word. On graduation in 1900, Einstein was not offered an academic post at ETH although that was the general custom; indeed, he was the only one of his graduate friends to be so treated, thereby blocking the usual route to an academic career. To add to his difficulties, his family sponsors considered that as he had graduated he could stand on his own feet, and they stopped his allowance. Thus, he had to search for employment; and, with a little help from his friends, he found it, as everybody knows, in the Bern Patent Office. Ironically, this new career path was not a million miles away from the one his father had always planned for him in electrical engineering.


writings of these pioneers that they were not immune to the torments of self-doubt. We should not be surprised. Recall the timeless lines of one of the greatest intellects of all time, William Shakespeare, which apply as much to science as to any other human endeavor: To be, or not to be, that is the question: Whether ’tis Nobler in the mind to suffer the slings and arrows of outrageous fortune, or to take arms against a sea of troubles, and by opposing end them. . . . Thus conscience does make cowards of us all. And thus the native hue of resolution is sicklied o’er with the pale cast of thought and enterprises of great pitch and moment with this regard their currents turn awry and lose the name of action.

Nowadays, all scientists, self-doubters or not, are nevertheless required to compete with their closest colleagues not only to win acceptance for their ideas, struggles that are part of the fabric of healthy scientific inquiry, but for the funds they will usually need to begin their assaults on convention. To get funded they must argue convincingly in writing that the problems they wish to tackle will yield the best possible returns on the requested funds as measured against whatever criteria their target agency is currently imposing. Furthermore, the need to demonstrate such narrowly based competitiveness is never-ending. Approval must be obtained for every significant new phase of a work, and funding can be lost at any time. It almost defies belief that funding agencies everywhere claim that these new arrangements are not adversely affecting creativity.

Joseph John Thomson

Joseph John Thomson (1856–1940) was born in Manchester as the son of a prosperous bookseller and publisher. His father had planned that he should serve an engineering apprenticeship, but while he was waiting for a vacancy he began at the exceptionally young age of 14 to study at Owens College (later to become Manchester University). Unfortunately, his father died shortly thereafter; and the family could not raise the necessary premium for the apprenticeship, then approximately £50, or the equivalent of £5000 of today’s money. Undeterred, he won a scholarship that allowed him to continue studying at Owens and, when he was 20, won another to study mathematics at Trinity College, Cambridge, which of course was Isaac Newton’s old college. Thomson was clearly setting his sights at the highest possible level. Cambridge would seem to have agreed that his ambitions were fully justified, as only 5 years later he was made a Fellow of his college. Three years after that, in 1884, he was offered and accepted the post of Professor of Experimental Physics at the Cavendish Laboratory, one of the world’s foremost labs (where his predecessors had been James Clerk Maxwell and Lord Rayleigh), and was elected a Fellow of the Royal Society. He was still only 28! Had his father lived, one wonders what he might have done in engineering that could have matched such a meteoric rise.


His illustrious predecessors would, of course, have been hard acts to follow, but it seems that Thomson was determined from the beginning to go even further than Maxwell. His talents in the laboratory, in which he studied electric discharges in gases, became legendary. F. W. Aston, a student of Thomson’s who later was to win the Nobel Prize in Chemistry for his discovery of isotopes, said in his obituary of Thomson:* When results were coming out well his boundless, indeed childlike, enthusiasm was contagious and occasionally embarrassing. Yet when hitches occurred, along would shuffle this remarkable being, who, after cogitating over his funny old desk in the corner, and jotting down a few figures in his tidy handwriting, on the back of somebody’s thesis or an old envelope would produce a luminous suggestion, like a rabbit out of a hat, not only revealing the cause of trouble, but also the means of cure. This intuitive ability to comprehend the inner working of intricate apparatus without the trouble of handling it appeared to me then, and still appears to me now, the hallmark of a great genius.

Perhaps Thomson was revealing his engineering prowess after all? Thomson had set himself the task of resolving a long-running controversy. When electricity was passed through a low-pressure gas, as in a fluorescent lamp for example, it had become clear that the cathode was emitting some sort of rays. But interpretations were strongly polarized. According to British-based physicists, these rays were “hard” charged particles; but their German-based counterparts, observing that they could cast shadows, had concluded that they must be a form of radiation, waves that were a propagation of a disturbance in the ether. The German radio-wave pioneer, Heinrich Hertz (1857–1894), had also observed that they were not deflected when subjected to strong electric fields, so the “waves” could not possibly be electrically charged. Here is yet another example of how social pressures can affect research policies and outcomes. There could be no question, of course, that German physicists had any less insight or intuition than those based in Britain. But one’s working environment can exert subtle pressures, even though at the beginning of the twentieth century researchers everywhere were generally free in principle to do as they pleased. Nevertheless, it should not be surprising that those pressures might occasionally lead to local blind spots or reluctance to challenge the generally accepted views of one’s colleagues. In contrast, the pressures in today’s world are far from subtle and indeed can often lead to the veto of unpopular proposals. However, Hertz’s observation that the cathode radiation, whatever it was, was not deflected by electric fields was obviously crucial. It could not be ignored, and it immediately attracted Thomson’s attention. His explanation was that the collimated beam of cathode radiation was being scattered or diffused by residual atoms in the low-pressure
* F. W. Aston writing in The Times, September 4, 1940, quoted in Pais (1993) p. 118.


gas, thus dispersing the beam and making any deflection impossible to observe. His explanation had to be tested, of course. By using more efficient vacuum pumps, he substantially reduced the pressure in the tube and hence the probability of scattering or diffusion, from where he clearly saw the deflection produced by the electric field: therefore, the particles must be electrically charged. His next step was to measure the ratio of electrical charge to mass—e/m— for the cathode particles (he called them corpuscles; only later were they called electrons); and for this he used a characteristically ingenious method. Once again, he collimated the cathode beam but now arranged that the deflection by the electric field was exactly equal and opposite to the magnetic field’s, which he could also vary. Thus the beam would pass unaffected through both fields. If he then switched off one of the fields, the measured deflection would lead to a very accurate measure of e/m for his corpuscles, a value that he could calculate using the very equations his illustrious predecessors Newton and Maxwell had so presciently provided for him long ago. He was also able to measure the particle’s charge, which he found to be the same as that of an ionized hydrogen atom but opposite in sign, and the particle’s mass, which he found was some two-thousandth of that particular atom. Repeating these measurements for various gases in the evacuated tube and different cathode materials led to identical results. Thomson therefore concluded, in an announcement made in 1897, that his corpuscles were universal elementary constituents of matter as they had the same character independently of their source. But he was “somewhat startled” to realize that his observations indicated a state of matter more finely subdivided than atoms, which according to the ancient meaning of their very name should be indivisible. Thomson won the Nobel Prize for Physics for this discovery in 1906. The Golden Age had well and truly dawned. As for any major new field, all was not plain sailing. Now that he had discovered that atoms had constituents, they must therefore have structure; and Thomson was determined to discover what it was. Atoms had to be electrically neutral, of course, and any model also had to account accurately for their mass. Thus, Thomson postulated in 1903 that the negative charge of the electrons is compensated for by a positive charge distributed over a sphere of the correct radius for an atom. He also assumed that the sources of positive charge do not contribute to atomic mass, which he proposed was made up entirely by his corpuscles. Therefore, an atom must contain thousands of them. But he quickly realized that such a “plum pudding” structure, as it came to be called, would not be stable unless all the corpuscles were at rest. That would be highly unlikely. Thomson then proposed in 1906 that the experimental observations of X-ray scattering by atoms could be explained if the atom had roughly the same number of corpuscles as its atomic number, perhaps his most significant discovery as a theoretician. But the drawback was that the corpuscles could not therefore account for an atom’s mass. Where did that

Figure 4 Photograph of J. J. Thomson taken in 1899.

Where did that come from? Pais points out that on this question, Thomson was silent (Pais 1993, p. 119).

Ernest Rutherford

Ernest Rutherford (1871–1937) was born near Nelson in New Zealand's South Island to British-born parents, the fourth of twelve children. He excelled at school, particularly in mathematics, and took a double first-class honors degree in mathematics and in mathematical physics at Canterbury College in Christchurch. This is rather surprising as it was said of him (Pais 1993, p. 123), when much later he was at the Cavendish Lab, that he had "neither much expertise nor much taste for theoretical physics." But he was renowned for his inventiveness, which he later claimed was honed on the challenges of helping out on his parents' farm, and the homespun philosophy, "We haven't the money, so we've got to think." Contrast that with the philosophy embraced by virtually every funding agency today, "We haven't the money, so we've got to prioritize." The actions these philosophies imply are, of course, poles apart. The first requires deep, open-ended contemplation and analysis while the other merely involves choices between clearly defined alternatives based on existing modes of thinking.

Unfortunately, we are unlikely to see significant reform until funding agencies appreciate these differences and modify their rules accordingly.

In 1895, Rutherford was awarded an 1851 Exhibition Scholarship, which he used to study under J. J. Thomson in the Cavendish Laboratory in Cambridge. At first, he continued with his work begun in New Zealand on the detection of transmitted radio waves, a discovery made before Marconi's announcement in 1896 of his wireless transmission of telegraph signals. However, with a generosity of spirit he was to show over and again in later life, he did not seek the priority that in today's world would win him brownie points galore, presumably because he could see the possibility of richer pastures in the Cavendish. Indeed, he had been flattered that Thomson had invited him to study the effects of X-rays on gaseous discharges, an offer he could hardly have refused as the great man clearly thought that Rutherford was capable of making an impact in this new field—the discovery of X-rays by the German physicist Wilhelm Röntgen (1845–1923) had been published only a few months earlier.* A year later, Rutherford turned to radioactivity and almost immediately announced the discovery of two new types of radiation, α-rays or particles, which are easily absorbed, and β-rays or particles, which are much more penetrating. The French physicist Henri Becquerel (1852–1908) had discovered the phenomenon of radioactivity in 1896, and in 1900 he proved that β-rays are electrons and indeed were the "corpuscles" that Thomson had discovered.

In 1898, Rutherford wanted to get married but did not earn enough to do so, and the rules at Cambridge at that time did not allow non-Cambridge graduates to be Fellows or be given early consideration for senior appointments. His dilemma was resolved by a move to Montreal, Canada, as the Macdonald professor of physics at McGill University. As a colonial himself, one can imagine that the Montreal post appealed to him. He shortly wrote to his wife-to-be, "I am expected to do a lot of work and to form a research school in order to knock the shine out of the Yankees!" But he did very much more than that, of course. One of the reasons he chose McGill was that it enjoyed generous funding and the freedom to match, provided by a Scottish-born philanthropist based in Montreal, Sir William Macdonald (see Poster 3). Rutherford's reputation was such that he could attract such brilliant young students as Frederick Soddy, who had worked with Thomson and developed the concept of "half-life" for the decay of radioactive elements, and Otto Hahn, who was later to discover nuclear fission; and they quickly laid down the foundations of radioactivity. Awards for Rutherford flooded in, culminating in his winning the Nobel Prize for Chemistry in 1908 for his "investigations into the disintegration of the elements and the chemistry of radioactive substances."

* In 1901, that discovery would also win Röntgen the first Nobel Prize to be awarded in physics.

Among other things, he proved that the α-rays are helium atoms* ejected from radioactive elements such as radium, thorium, and uranium as a result of an intense "atomic explosion," as he put it, which was perhaps the first use of these fateful words in print. He also commented in his Nobel Lecture (Rutherford 2013):

These experiments brought clearly to light the enormous energy, compared with the weight of matter involved, which was emitted during the transformation of the emanation. It can readily be calculated that one kilogram of the radium emanation and its products would initially emit energy at the rate of 14,000 horse-power, and during its life would give off energy corresponding to about 80,000 horse-power for one day.
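To put those figures into present-day units, a quick conversion (the horsepower values are Rutherford's; only the unit conversion is added here):

```python
# Converting the figures quoted in Rutherford's Nobel Lecture into SI units.
HP_TO_WATT = 745.7                                # one mechanical horsepower, in watts
initial_power_w = 14_000 * HP_TO_WATT             # initial output per kilogram of emanation
total_energy_j = 80_000 * HP_TO_WATT * 86_400     # 80,000 horsepower sustained for one day

print(f"initial output: about {initial_power_w / 1e6:.0f} MW per kilogram")
print(f"lifetime output: about {total_energy_j / 1e12:.1f} TJ per kilogram")
# Burning a kilogram of coal, for comparison, releases roughly 3e7 J.
```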

But despite all this, the Prize was for Chemistry. As he often told friends, the fastest transformation he knew of was his transformation from a physicist to a chemist! However, the work for which Rutherford was to become most famous had hardly yet begun. In 1907, just before he was awarded the Nobel Prize, Rutherford moved to the University of Manchester as professor of physics in order to be closer to the major scientific action. Now a world-famous scientist, he attracted experienced students who had already done important work. Rutherford suggested to Hans Geiger and Ernest Marsden that they study α-particle scattering from gold atoms. To do that, they collimated α-particles from the radioactive decay of radium to make a beam aimed at a very thin target of gold leaf. When α-particles hit a luminescent zinc sulfide screen they produce a faint flash of light, so a diligent and patient observer sitting in a darkened room could detect the scattered particles and also measure their scattering angle. This is a very laborious process indeed, as I can testify from my student days at Liverpool University. But I was not doing frontline research, merely learning my trade, and I knew the answer that was expected! Most particles were scattered by a few degrees or less, a result which was in accord with Thomson's model, by which electrons are distributed as negatively charged, very light "plums" dotted throughout a "pudding" of smeared-out positive charge over the atomic volume. But they also found that a tiny number of them were deflected by 90° or more, a result that Rutherford frequently described as the most incredible of his life; he is widely quoted as having commented:

It was as if one fired a 15-inch shell at a tissue paper and it had bounced right back.

* The particles emitted from radium are not in fact helium atoms, which are electrically neutral, but helium nuclei with an electrical charge of +2; that is, twice that of the electron and opposite in sign.

Poster 3: Sir William Macdonald and McGill University

It's an ill wind that blows nobody any good.

William Christopher McDonald (1831–1917) was born on Prince Edward Island in eastern Canada, from Scottish stock who had emigrated in 1772. (He changed his name to Macdonald in 1898 on being knighted.) He and his brother, Augustine, moved to Boston as general merchants, before returning to Canada in 1858 to set up a tobacco importing business in Montreal. They could hardly have made a more timely choice. Most North American tobacco at that time was grown in the Southern states of the United States—shortly to become the Confederate States, of course. When the American Civil War began in 1861, the Northern states lost their tobacco suppliers, and the McDonald brothers seized upon this huge opportunity; indeed, they seem to have cornered the American crop. Using oceangoing ships, they transported tobacco leaf from the Southern states to Montreal, where they processed it before selling it at vast profit to the Northern states' anxiously waiting chewers and smokers. In 1865, the partnership with his brother was dissolved, and William operated under his own name. He became very rich but was also somewhat ashamed that his wealth had come from tobacco, the use of which he despised; however, he found that he could ease his conscience by philanthropic giving.

The target of much of his generosity was McGill University in Montreal, which was named after another prominent merchant of Scottish stock, James McGill. Macdonald concentrated his philanthropy on the sciences and financed new buildings for engineering, chemistry, and physics as well as a library, and also generously endowed chairs and provided the latest equipment in these scientific fields, all given without strings—except perhaps his chauvinistic wish that his beneficiaries should strive to put some American universities in the shade! It is estimated that his bequests to McGill exceeded $13 million in money of the day, a truly prodigious sum worth perhaps more than $500 million today. He took particular pride in having helped Rutherford to carry out some of his great works and to win the Nobel Prize.

Here is yet another example of an ugly fact destroying a beautiful theory. This result meant that Thomson's model must be wrong: a diffuse assembly of light electrons could not deflect a heavy, fast-moving α-particle by such a huge amount,* but it took Rutherford more than a year to "devise an atom much superior to JJ's," as he put it.

* α-particles are almost 8000 times more massive than electrons.
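A back-of-envelope check of that claim, using modern mass values rather than anything from the original papers:

```python
# Rough check of why Thomson's model fails: the largest angle through which a
# heavy alpha-particle can be deflected in a single encounter with a light,
# essentially free electron is of the order of the mass ratio m_e/m_alpha
# (in radians).
import math

m_e = 9.109e-31       # electron mass, kg
m_alpha = 6.645e-27   # alpha-particle mass, kg

theta_max = m_e / m_alpha
print(f"single-collision deflection: about {math.degrees(theta_max):.4f} degrees")

# Even many thousands of such nudges, accumulating as a random walk, fall
# hopelessly short of the 90-degree-plus deflections Geiger and Marsden saw.
n = 10_000
print(f"after {n} collisions: about {math.degrees(theta_max * math.sqrt(n)):.2f} degrees")
```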

Rutherford’s proposed atom was made up of Z electrons (where Z is the element’s atomic number—for gold it is 79) and a small central body of positive charge Ze in which virtually all the atom’s mass is concentrated. Thus, the large scattering angles could be explained as the result of the electrostatic repulsion of the doubly charged α-particle by the highly charged massive central body—the nucleus, as it was later called. He also estimated that the nuclear radius was some 100,000 times smaller than the atom’s. Thus, an atom expanded to fill a typical domestic room would have a nucleus the size of a grain of sand at its center; that is an atom would consist almost entirely of empty space. As they had observed, therefore, most α-particles would pass largely unscathed through the thin gold leaf, as there would be very little probability of hitting anything substantial. Rutherford announced this discovery in May, 1911, but it was generally ignored. Rutherford himself also put the discovery to one side perhaps because if it were correct it proved that his friend the great J. J. Thomson must be wrong. But it is also possible that Rutherford did not wish to focus attention on himself. While at Manchester, he did not “sign” a third of the papers on radioactivity, even though he initiated almost every investigation, often doing the preliminary work and passing the topic to a student or a colleague. He did not add his name to Geiger and Marsden’s publication on the large-angle scattering of alpha particles that he announced in 1911, nor much later to James Chadwick’s on the discovery of the neutron, a major milestone in the history of atomic physics, nor to Cockcroft and Walton’s paper announcing the splitting of the atom using a particle accelerator. Such humility would be very rare today, as funding is so sensitively dependent on the numerical magnitude of one’s publication record. For whatever reason, however, according to Pais, he did not raise the subject at the first Solvay Conference* of October, 1911, held in the Metropole Hotel in Brussels, a select gathering of the world’s most prestigious physicists (Pais 1993, p. 125). But on his return to Manchester, Rutherford met Niels Bohr for the first time. He most definitely did not ignore it. Niels Bohr Niels Bohr (1885–1962) was born in Copenhagen to one of the most famous families in Denmark in a house facing the Danish parliamentary buildings— Christiansborg Palace—that had once been a home of the King of Greece. His father, an eminent physiologist who became Rector of Copenhagen University, awakened his interest in physics; and his mother came from a family distinguished in the education field. Bohr therefore did not lack for social standing or intellectual stimulation. Indeed his mother also ensured that he was kept

* The Solvay Conferences were founded by the Belgian industrialist, Ernest Solvay, and continue to the present day.

While Bohr might have enjoyed every advantage on setting out, it could easily have led to complacency and an expectation that life's gifts would continue to arrive on a plate. It did not do so in his case, however; and his "silly fierce spirit," as he described it, drove him constantly to search for tough challenges. While still an undergraduate, he entered a Prize competition that involved finding a way of implementing in the lab a theoretical proposal made by Lord Rayleigh in 1879 for measuring the surface tension of a liquid. The work required was exclusively experimental, of course, a field in which Bohr hardly excelled; and his physics department had no laboratories! Undaunted, however, he did the work in his father's lab, devised a viable method, won the Prize, and submitted the results for publication by the Royal Society.

For his PhD thesis, his supervisor, Professor Christian Christiansen, asked him to assess the extent to which electron theory explained the physical properties of metals, returning him therefore to his theoretical strengths. Perhaps for the last time in his life, he embarked on an extensive and critical survey of the literature. Henceforth, Bohr would prefer to develop his ideas through intense and prolonged personal discussions, as only then could he fathom what people were really thinking on the state of progress in a field. Today, such an approach could prove embarrassing and perhaps even impossible as the very people a present-day Bohr would consult might be competing with him for the same funds, another unintended consequence of current policies that funding agencies ignore.

Bohr's study reviewed the idea that metals consist of positively charged atomic nuclei around which large numbers of "free" electrons buzz randomly. Applying an electric field causes the electrons to move in concert, thereby creating an electric current. Thus, the electrons can be considered to behave like the "atoms" in a gas, and Boltzmann's theory of gases should therefore apply. Bohr produced a comprehensive treatment of all this conventional wisdom, but pointed out that this essentially "classical" approach could not explain why, for example, some metals exhibit magnetism. Bohr was clear, therefore, that his thinking on this problem had brought him to the end of the "classical" road; henceforth any progress would require a move into the mysterious and almost unexplored quantum domain hinted at by Max Planck a few years earlier.

Following Copenhagen's award of his PhD, Bohr moved to the Cavendish because he regarded "Cambridge as the center of physics and Thomson a most wonderful man," and he wanted to discuss his PhD thesis with him. Apparently, however, they did not get on at first, perhaps because Bohr, with his typically brutal but invariably impeccably polite honesty, had opened their discussion by pointing out that there were some problems with Thomson's atomic model—indeed it was wrong! But he survived, which is also a tribute to Thomson's greatness and the environment he had created that encouraged open discussion.

Subsequently, Bohr went to meet Rutherford in Manchester to ask if he could study radioactivity with him, a move to which Thomson agreed. Rutherford then asked him to look at the absorption of α-particles in metals, the complement of the α-particle scattering problem. Here, the expectation would be that the α-particles would tend to "see" only the metal's numerous electrons; the microscopic nuclei would be mainly invisible. Thus, the probability of α-particle absorption could be relatively easily calculated from their known electrical interactions with electrons; and according to his young friend and fellow student at Manchester, Charles Galton Darwin, the grandson of the illustrious Origin-of-Species Darwin, the absorption probability should depend primarily on Z, the atomic number. Another close friend and student, Georg von Hevesy, a Hungarian nobleman, had also told Bohr about Soddy's proposal to give the name "isotopes" to the chemically identical versions of an element that have different atomic weights, which Soddy would publish the following year (1913). As Pais relates, it immediately struck Bohr that while isotopes have different atomic weights they should all have the same Z (Pais 1993, p. 126). He also realized that radioactive decay could be explained if the α- and β-particles came from the nucleus rather than from the atom generally. Thus, α-particle emission would cause Z to change by −2, while for electron emission the change would be +1. Furthermore, as Darwin's theory for α-particle absorption did not quite agree with the merciless data, Bohr thought that the discrepancy might be caused by Darwin's assumption that the electrons were completely free. But what if they were tied in some way to the nucleus—that might surely affect Darwin's calculations? Thus the germs of what came to be known as the Bohr atom began to ferment in Bohr's fertile mind.

Returning to Copenhagen in 1912, Bohr planned to get married and was therefore in want of a job. Christiansen, the only professor of physics in Denmark, was retiring due to ill health; and his post was awarded to Martin Knudsen, thereby closing Bohr's obvious but ambitious immediate route to employment. Luckily, Knudsen asked Bohr if he would be his teaching assistant while awaiting a more permanent position. He accepted, of course, as he could then marry and continue with the work begun at Manchester. There were now many workers in this field, and the generally agreed problem was that if atomic electrons orbited the nucleus like planets around the Sun (with electrical forces substituted for gravity) their radii, as for the planets, could have any value whatsoever. In addition, electrons moving in circular orbits must be accelerating, and accelerated electrons always emit radiation—that is how radio waves are generated, for example. These orbits would be unstable, therefore, because electrons would lose energy and spiral into the nucleus, the decaying atom thereby emitting a continuous spectrum of radiation. But that was not observed. Atomic radiation spectra usually consisted of a series of specific frequencies, usually called "lines." There were other important developments. John Nicholson from the University of Cambridge suggested that an electron's angular momentum about the nucleus should "only rise or fall by discrete amounts," that is, in modern parlance, it should be quantized.

electron’s angular momentum about the nucleus should “only rise or fall by discrete amounts,” that is, in modern parlance it should be quantized. Many years earlier, Johan Balmer, a mathematician from Basel, had published a simple empirical formula that accounted precisely for the observed hydrogen line spectrum; but his approach had been strictly mathematical; he had cracked the problem as if it had been a cryptic code. There was no physics in it. As new spectral lines continued to be discovered, Balmer’s formula fit their measured frequencies precisely; but no one could explain why it was so good. For whatever reason, Bohr did not hear about Balmer’s work until 1913, but taken together with Nicholson’s contribution, Bohr’s own ideas quickly began to fall into place. Turning his attention specifically to the hydrogen atom, Bohr now knew that classical physics could not help and that any solutions could come only from quantum theory. Planck and Einstein had led the way, of course, but they had dealt with vast assemblies of particles. Bohr’s system had only two—a hydrogen nucleus and an electron—virtually the most fundamental system that could be conceived; and such averaging “statistical” approaches as his illustrious predecessors had used could not be applied. Following a few weeks of intense thought, he proposed that the orbiting electron’s kinetic energy was quantized—its angular momentum could have only have values that were integral multiples of h/2π. Thus, his theory would strictly limit the number of allowed orbits, which he called “stationary states,” their energy increasing incrementally as they moved closer to the nucleus. So far, so good, but that was not enough; there would still be nothing to stop electrons even in these states from losing energy and falling into the nucleus. The 28-year-old Bohr was undaunted. This most junior member of the only physics department in Denmark, firing his “fierce spirit” to the utmost, then made one of the most courageous proposals in science by simply asserting with supreme confidence that the “ground state,” the orbit in which electrons had the lowest allowed kinetic energy, was stable. It could not radiate. An electron in the ground state would stay in it forever unless energy was provided to move it temporarily to higher energy states or to eject it completely to leave behind a hydrogen ion. The hydrogen atom’s spectrum of line frequencies would then stem from electrons transiting between these allowed states, according to the equation:

E_n − E_(n−1) = hν,

where E_n and E_(n−1) are the energies of two stationary states that form the initial and final states of the radiation process, h is Planck's constant, and ν is the frequency of the emitted line. As the line spectrum of the hydrogen atom was well known, Bohr could use their values to calculate the energies of the ground and excited states, which data not only enabled him to derive the Balmer formula but also provided some physical justification for it; a tour de force indeed. All this was soon followed by calculations for the spectrum of ionized helium atoms—which have a single electron orbiting a doubly charged nucleus and hence a spectrum similar to that of atomic hydrogen.
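A minimal numerical sketch of the frequency condition at work, using the modern value of 13.6 eV for hydrogen's ground-state binding energy (the constants are present-day conveniences, not Bohr's own figures); it reproduces the visible Balmer lines to a good approximation:

```python
# Bohr's stationary-state energies for hydrogen and the frequency condition
# E_n - E_m = h * nu, applied to transitions ending on the n = 2 state
# (the Balmer series). Modern constants are used for illustration.
H_EV_S = 4.1357e-15   # Planck's constant, eV*s
C = 2.998e8           # speed of light, m/s
RY = 13.6             # hydrogen ground-state binding energy, eV

def energy(n):
    """Energy of the n-th stationary state, in eV (negative = bound)."""
    return -RY / n ** 2

for n_upper in (3, 4, 5, 6):
    delta_e = energy(n_upper) - energy(2)   # energy of the emitted quantum, eV
    nu = delta_e / H_EV_S                   # frequency from h * nu = delta_e
    print(f"n = {n_upper} -> 2: {C / nu * 1e9:.0f} nm")
# Output: 656, 486, 434, 410 nm -- the red, blue-green, and violet Balmer lines.
```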

This theory, which united Rutherford's concept of the nuclear atom with the Planck–Einstein quantum theory, led to Bohr being awarded the Nobel Prize in Physics in 1922. Thus began the first faltering steps in the long-running saga of quantum theory's evolution. As yet, among many other things, the movements of the individual atomic constituents were unexplained. Indeed, Rutherford had written to Bohr about his proposals to say:

There appears to be a grave difficulty with your hypothesis, which I have no doubt you fully realize, namely how does an electron decide what frequency it is going to vibrate at when it passes from one stationary state to another? It seems that you have to know beforehand where it is going to stop.

The question seems to indicate that Rutherford had not fully accepted Bohr's implicit conjecture that classical theories, and indeed classical thinking, were not capable of explaining atomic structure. But that thinking, which was as old as science itself, was deeply embedded in Rutherford's question, so that even if Bohr had known the answer he might still have had difficulty in finding the right words. Bohr's audacity had pushed the doors of the quantum world open a little wider. There would be many false starts before more convincing stories could be told and Rutherford's apparently killer question could be shown to be meaningless.

Today, few newly qualified postdocs of Bohr's age in 1913 (see Figure 5) would be given the freedom to follow their intuition wherever it may lead, only later being required to justify their steps, if necessary, or to await posterity's judgment. Intuition cannot be justified a priori simply because it necessarily takes place outside the framework of conscious reasoning. However, it is a priceless human trait, especially when the mind in question has been carefully prepared by years of contemplation. But today's rules are far too prosaic and insist on chapter and verse at every stage. Bohr's proposal was initially based solely on intuition (as was Einstein's that the velocity of light was constant for all observers); only later was Bohr's confirmed. A Manchester colleague, fellow postdoc, and rising star, Harry Moseley, wisely said in a letter to Bohr written only a few weeks after he had blasted off his bolt from the blue:

Your theory is having a splendid effect on Physics, and I believe when we really know what an atom is, as we must within a few years, your theory even if wrong in detail will deserve much of the credit.

Sadly, Moseley’s promising career was brought to a tragic end less than 2 years later when, as a second lieutenant in the British army, he was killed at Suvla

Figure 5 Photograph of Niels Bohr taken in 1913. (Source: Wikipedia.)

Today, there are many confounding problems (and not only in physics!) that have resisted all conventional and expensive onslaughts for decades, but who knows what successes the liberated intuitions of what should be an iconoclastic youth might have? As things stand, we may never know. It is, of course, impossible for me or anyone else to prove that the mountains of spirit-sapping bureaucracy through which every researcher must now tunnel have prolonged these extended stalemates. But as I have often said, those who changed the rules should be obliged to prove that they are not having these effects.

Wolfgang Ernst Pauli

Wolfgang Ernst Pauli (1900–1958) would also seem to have been destined to be a scientist from birth. His father, Wolfgang Joseph Pauli, was a medical doctor turned university researcher in the physical sciences, and his godfather, after whom his father named him, was Ernst Mach (1838–1916), professor of physics at the University of Vienna and one of the nineteenth century's most inspirational scientists. Pauli's father had changed his name from Pascheles to Pauli and converted to Catholicism only a year or so before his son was born, in the hope that the changes would help his career in the face of anti-Semitism. As his biographer Charles Enz explains, his family had not been "good Jews," and they had no intention of being "good Christians" (Enz 2002, p. 8). He merely wanted to be a good professor of chemical medicine.

Not surprisingly, it seemed to work, and he soon got the appointment he wanted at the University of Vienna.

In 1872, Mach had been instrumental in putting perhaps the final nail in the coffin of Isaac Newton's concept of "absolute true and mathematical time," which "flows equably without relation to anything external." Mach had asserted that the inertial properties of matter so accurately described by Newton's Laws are not, and indeed cannot be, absolute but arise from relative motions with respect to the distant stars and galaxies (see Poster 4). Einstein later was to acknowledge that this assertion, Mach's Principle as he dubbed it, had helped to inspire his creation of the General Theory of Relativity published in 1916, a theory that should surely rate as one of Homo sapiens' crowning glories.

Pauli was a child prodigy. Entering the Döbling Gymnasium in Vienna when he was 10, he was soon bored by its courses' slow progression. With his reading guided by Mach and expert tuition from Hans Bauer (1891–1953), a theoretical physicist, the young Pauli made impressive progress and, almost incredibly, made himself expert in Einstein's general relativity theory, keeping step with the development of Einstein's own ideas on this monumental work at every stage. Indeed, Pauli published his first paper on that subject when he was 18, shortly after graduation from Döbling. Moving to the University of Munich, he was soon noticed by the professor of physics, Arnold Sommerfeld (1868–1951), who over his distinguished career inspired so many gifted scientists in their own areas of expertise that he should perhaps be regarded as the father of theoretical physics. Seven of his students won Nobel Prizes; and, as Einstein put it, he "pounded out of the soil" more than 20 others whose contributions were similarly significant. Sommerfeld recognized Pauli's talents immediately and encouraged him to write a review article on the general theory. When it was published in 1921, Sommerfeld proudly sent it to Einstein, who commented, "Whoever studies this mature and grandly conceived work might not believe that its author is a twenty-one year old man." Einstein never flattered, and indeed, Pauli's coming-of-age paper is even today regarded as one of the general theory's definitive expositions.

As a student, Pauli wrote about the "shock" to his classically trained mind on first hearing Bohr speak about his quantum theory at a lecture he gave at Göttingen. Nevertheless, he embraced Bohr's ideas immediately and was awarded a PhD in 1921 for his work on the theory of ionized molecular hydrogen. Bohr meanwhile had continued his attempts to explain the most recent observations of "fine structure" in spectral lines—that is, the complexities induced in line frequencies when external electric or magnetic fields are applied to the emitting source—by postulating that electronic orbits may be elliptical as well as circular. This is a significant departure because elliptical orbits can rotate about their centers, which concept of course has no meaning if they are circular; and, in addition, ellipses can be specified by the ratios of their minor-to-major axes. But this treatment is essentially classical.

Poster 4: Mach, the Universe, and You

Mach asserted in 1872 that a body's—yours, say—inertial properties, that is the extent to which it resists all changes to motion, are governed by its acceleration with respect to the distant stars and all other matter in the Universe. The words "absolute acceleration" have no meaning. But as you glance at the stars on a clear night it might be difficult to accept that they have any influence other than inducing a sense of wonder at their beauty. Consider the following experiment: The stars appear stationary, of course; but if you begin a fast pirouette, two remarkable things happen. The stars appear to rotate and your arms are pulled away from your body. Why? Mach thought that these events were intimately related.

Michael Berry, a University of Bristol physicist, has given a rough explanation of how the stars can in these circumstances exert an "inertial force" on our arms and why we are all, therefore, constantly subject to the forces of cosmology (Berry 1991, pp. 37–39). Gravity's inverse square law accounts for the static forces we experience, such as those that keep our feet firmly on the ground, for example, but there are as many distant objects in the universe above us as below, so their contributions cancel out; in any case, that law is not acceleration dependent and, as Newton's laws show, cannot account for the forces hurling your arms out. In analogy to electromagnetic induction, however, there should be another gravitational force that falls off linearly with distance rather than as its square. The remote galaxies then dominate, and we can calculate the magnitude of the forces they create by summing their contributions, taking relativity into account and using the known average galactic density in the observable universe. It turns out that within a factor of 10 or so (it is too small) this analysis roughly accounts for what happens in this simple experiment. A factor of 10 may not seem like good agreement; but considering that the universe contains some 10¹¹ galaxies spread over a volume billions of light-years in radius, there are so many huge exponents flying around in the calculation that agreement within a single factor of 10 is pretty impressive. In any case, according to current knowledge we can see only a small fraction of the matter in the universe, so the discrepancy is in the right direction. Mach may be right, therefore.

But there is a sting in the tail. If gravity indeed has similar properties to electromagnetism, then in analogy with radio waves, gravitational waves must also exist. They would be exceedingly weak, of course—the effects they produce might be roughly 10³⁸ times weaker than radio waves—but none have yet been seen. Indeed their detection might be beyond the capabilities of current technologies. See Chapter 9.

In 1916, however, Sommerfeld discovered how to incorporate relativity into the analysis and to quantify accurately the rates at which elliptical orbits precess. Thus, electronic states could now be characterized by three quantum numbers rather than Bohr's original one, making possible new ranges of electronic transitions between them, a revision that accounted for the observed "fine structure" in the hydrogen spectrum. Bohr was impressed and wrote to Sommerfeld (Pais 1993, p. 188):

I do not believe ever to have read anything with more joy than your beautiful work.

Following his "shock," Pauli accepted Bohr's invitation to spend a year with him in Copenhagen. Although he made a "serious effort" while he was there to explain the anomalous Zeeman effect—the anomalously high line splittings (fine structure) induced in the spectra of elements other than hydrogen—he was not successful. Soon after his return to Hamburg he turned to the problem of accounting for the ways successive electronic orbits are filled as atomic number increases—how to explain, for example, the dominance of the mysterious sequence 2, 8, 18—the maximum numbers of electrons found in the first few energy states of high atomic weight atoms. Drawing on the work begun in Copenhagen, he proposed a fourth quantum number—later called "spin"—that could have two values and would therefore double the number of electrons allowed in these states for each value of n, the principal quantum number Bohr had introduced for circular orbits. That number would previously have been n², but Pauli's new idea doubled it to 2n². He had thereby neatly accounted for the mysterious 2-8-18 sequence when n has the values of 1, 2, and 3, respectively.

But why should electrons respect this formula? What prevented them all from occupying the same state? Perhaps drawing inspiration from Bohr's boldness in a similar situation, Pauli, a 25-year-old lecturer, asserted as a new law that no two electrons are allowed to occupy the same quantum state. He claimed that once an electron moves into a state with specific values for energy and spin, its status as king and sole occupant of that particular castle is absolutely and inviolably guaranteed. This is the famous Exclusion Principle, published in 1925, for which Pauli won the Nobel Prize in Physics in 1945. It is now a major pillar of all successful atomic theories. Figure 6 is a photo of Pauli taken a year later, together with Linus Pauling and Werner Kuhn.

Many physicists found difficulty in accepting the proposed new quantum number since it apparently had no meaning or significance; it merely solved a mathematical puzzle. Luckily, that gap was soon filled by an idea from two young Dutch physicists, George Uhlenbeck (1900–1988) and Samuel Goudsmit (1902–1978), also published in 1925 while they were still graduate students at the University of Leiden. It allocated a quantity called spin to electrons, a quantized angular momentum that, in units of h/2π (where h is the ubiquitous Planck's constant), would have the values of ±½.
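The arithmetic behind Pauli's counting rule mentioned above is short enough to check directly:

```python
# Pauli's doubling of Bohr's n**2 count gives the 2, 8, 18, ... occupancies.
print([2 * n ** 2 for n in (1, 2, 3, 4)])   # -> [2, 8, 18, 32]
```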

Figure 6 Photograph taken in 1926. From left to right: The American chemist and Nobel Prize winner Linus Pauling, the Austrian physical chemist Werner Kuhn, and Wolfgang Pauli standing on the deck of a boat during a European trip. (Photographer unknown.)

Planck’s constant, would have the values of ±½. For the neutral hydrogen atom, for example, it means that an electron’s spin in the ground state can be aligned either parallel or anti-parallel to the nuclear spin and that hydrogen, therefore, should have two allotropic forms, ortho- and para-hydrogen. This indeed is what is observed, although the story took time to emerge. Jan Oort (1900–1992), the director of the University of Leiden’s Observatory during the wartime German occupation of The Netherlands, aimed to boost morale by encouraging theoretical studies among staff and students, many of whom were there illegally having escaped detection by the Germans. Seminars were held secretly in the observatory basement even though he knew that such actions placed him in serious danger. At one of these covert gatherings, H. C. Van de Hulst predicted in 1944 that the allotropes of neutral hydrogen would be minutely separated in energy and that transitions between them should be observable at a wavelength of 21.06 cm. They hoped to be the first to see them when their radio astronomy studies could be resumed; but, unfortunately, their microwave receiver was destroyed by fire shortly after the end of the war; and the credit for observing this important transition went to Ed Purcell from Harvard University in 1951, whom we will refer to again for his work on magnetic resonance imaging (see Chapter 7). A full quantum mechanical analysis reveals that the 21-cm transition is actually “forbidden,” a

70

THE GOLDEN AGE OF PHYSICS

A full quantum mechanical analysis reveals that the 21-cm transition is actually "forbidden," a somewhat misleading terminology as it means that it can occur, but only with considerably reduced probability; in this case, the transition has a half-life of some 12 million years (Longair 2006, p. 218). However, hydrogen is so abundant that this very low probability transition can easily be seen. Indeed, thanks to the Doppler effect, it is now used routinely to analyze velocity distributions in our own and other galaxies and has made substantial contributions to cosmology.

One of the presumed absolutes, then as now, has been that energy is always strictly conserved. However, studies on radioactive decay started before the First World War by Lise Meitner and Otto Hahn had indicated that this might not always be true. If it were, β-particles emitted by a radioactive nucleus should be mono-energetic, but they were observed with a continuous range of energies up to a certain maximum. Bohr's preferred solution was that energy might only be conserved on average; but Pauli, inspired by the gravity of the problem, made an outrageous, "Nothing venture, nothing win" proposal (Enz 2002, p. 215) and suggested that a neutral but very light particle was being emitted unobserved along with the β-particles and was sharing the decay energy with them—hence their continuous spectrum. He called these new particles "neutrons." Confusingly perhaps, two years later James Chadwick discovered the existence of a new heavy electrically neutral particle that was being emitted by the nucleus. That, too, was called the neutron, for which discovery he won the Nobel Prize in Physics in 1935. Pauli's new particle was appropriately renamed the neutrino shortly afterward by the Italian physicist Enrico Fermi. It is not surprising that neutrinos were not observed at the time; we now know that they can penetrate light-year thicknesses of lead without a significant chance of an interaction. Indeed, they were not positively identified until 1956, when two American physicists, Frederick Reines and Clyde Cowan, saw them in a tour-de-force experiment for which Reines won a share of the Nobel Prize for Physics in 1995 (Cowan died in 1974). Neutrinos are now known to be among the most abundant particles in the universe, outnumbering atoms by a factor of ∼10⁹.

Werner Heisenberg

Werner Heisenberg was born in 1901 in Würzburg but grew up in Munich, where his father was professor of Greek at the University of Munich. He studied at Munich's Maximilian School until 1920, when he enrolled at the university to study theoretical physics with Sommerfeld. Heisenberg chose the stability of laminar flow in liquids for his PhD studies, which he completed in 1923. Over the next few years he intermittently commuted between two of the best places in the world—the universities of Göttingen and Copenhagen—from which to keep in touch with the most exciting developments in other fields. Max Born (1882–1970) was at Göttingen, a German theoretician who, like Sommerfeld, had the legendary capacity for attracting and

inspiring the brightest young people to work with him. One was Heisenberg, and another was Erwin Schrödinger (1887–1961). Graduating from Munich in 1924, Heisenberg went at Bohr’s invitation to Copenhagen, supported by funds from John D. Rockefeller’s International Education Board and funds from Carlsberg, the famous Danish beer brewer. Following a brief sojourn in Göttingen, he returned to Copenhagen as a lecturer in theoretical physics until 1927, giving his lectures in Danish or English, languages he had tasked himself to learn in whatever time he could squeeze from thinking about quantum mechanics! At the beginning of the 1920s, Planck’s quantum hypothesis, by which light waves of frequency ν have energy hν, was generally agreed. Einstein’s assertion that light quanta have a momentum hν/c (where c is the speed of light) was also well supported by experiment, thereby reviving the corpuscular theory of light for certain phenomena. Indeed, Einstein had won the Nobel Prize for Physics in 1921 “for his services to theoretical physics and especially for his discovery of the law of the photoelectric effect.”* But, was light a wave or a particle? Surely it cannot be both. Endless debates on this apparent duality raged in labs all over the world; but suddenly an answer emerged: Well, yes it can be both: indeed it must. Prince Louis de Broglie (1892–1987), the French physicist who happened to be an aristocrat, had trained as a medieval historian at the Sorbonne in Paris before Einstein’s work on relativity inspired him to take up physics. But the war intervened and he was conscripted into the French army. Luckily for him, he was ordered to work on radio communications based in a lair in the Eiffel Tower and was not required to suffer the appalling fate inflicted on Harry Moseley and many others. But he had not forgotten Einstein; and in 1923, while still a graduate student, he had a “sudden inspiration.” As he says (Clark 1982, p. 321): Einstein’s wave particle dualism was an absolutely general phenomenon extending to all physical nature, and, that being the case, the motion of all particles, photons, electrons, protons, or any others, must be associated with the propagation of a wave.

Thus he formally proposed that all particles were guided by "matter waves" (soon to be called de Broglie waves). For electrons moving in atomic orbits, their wavelengths must be an integral fraction of the orbital circumference—that is, an electron's orbits and its energy therefore must be quantized! His startling proposal was pure conjecture; but within weeks, he had published another paper explaining how it could be put to the test. Electrons, he suggested, should be diffracted by slits or crystals, say, in precisely the same ways that X-rays are, a prediction that was soon confirmed.

* Einstein received the Prize in 1922, the same year for which Bohr received his. No Prize was awarded for 1921; but, in accordance with the Nobel Foundation statutes, the Prize could be held over for 1 year.
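As a hedged numerical illustration of de Broglie's idea (the constants are modern values, not his own figures): for the lowest Bohr orbit of hydrogen, exactly one matter wavelength fits around the circumference.

```python
# de Broglie's relation lambda = h / (m * v) applied to the lowest Bohr orbit
# of hydrogen; the orbital circumference then holds exactly one wavelength,
# which is Bohr's n = 1 quantization condition expressed in wave language.
import math

h = 6.626e-34      # Planck's constant, J*s
m_e = 9.109e-31    # electron mass, kg
v = 2.19e6         # electron speed in the lowest Bohr orbit, m/s
r = 5.29e-11       # Bohr radius, m

wavelength = h / (m_e * v)
print(f"wavelength / circumference = {wavelength / (2 * math.pi * r):.3f}")  # ~1.000
```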

Thus, just as Einstein's relativity theories had unified the classical concepts of space and time, and showed that these entities were not independent but should be considered coherently as space-time, so de Broglie had proved that the classical concepts of particle and wave were, for quantum-scale objects, intrinsically complementary. An object can therefore be considered to be either a particle or a wave; it makes no difference to the outcome. In 1929, de Broglie won the Nobel Prize for Physics for this remarkable discovery, but sadly, Einstein was never so honored for his work on relativity.

The revolution continued. Erwin Schrödinger extended these studies by proposing that de Broglie's eponymous waves could also be considered as a standing electron wave, in analogy with the harmonics of a vibrating string in a musical instrument. This work led to his now famous Schrödinger equation for electron behavior that involved a wave function, now universally represented by the iconic symbol ψ—the Greek letter psi—and the mathematical methods of partial differential equations.

Meanwhile, Heisenberg was engaged on a somewhat different tack. When he was 22 and working in Göttingen, he suffered from a severe bout of hay fever, and seeking relief left for a few days' walking and climbing holiday on Helgoland Island in the North Sea. While there, he conceived the idea of matrix mechanics, a version of quantum theory based exclusively, as he put it, on concepts that are physically observable, that is, on particles; whereas, as he said, orbital waves are not. When he returned to Göttingen, he showed it to Born, who in collaboration with his student Pascual Jordan revised it (there were also contributions from the British physicist Paul Dirac in Cambridge, another wunderkind—he was 8 months younger than Heisenberg), all of which Heisenberg approved. For a brief period at the beginning of 1926, therefore, it looked as though there were two self-contained but quite distinct systems for quantum mechanics: matrix mechanics and wave mechanics. But Schrödinger soon demonstrated that they were completely equivalent and led to the same results. Indeed, Heisenberg, using Schrödinger's approaches rather than his own, together with Pauli's exclusion principle and spin concepts, had succeeded in 1926 in interpreting at least approximately the spectrum of neutral atomic helium, which knotty many-body problem had long been a source of great difficulty. In general, however, wave mechanics grew in popularity, perhaps because the techniques of matrix mechanics were harder to learn, while wave mechanics, as Schrödinger had intended, made it possible to retain a vestige of the old classical images.

A little later, when working in Copenhagen, Heisenberg, who was then 26 years old (see Figure 7), made the discovery for which he is most famous—the Uncertainty Relations or Uncertainty Principle. He had long thought about how one might examine an electron in an atomic orbit. The radius of a hydrogen atom, for example, is about 10⁻⁸ cm, and to "see" an electron one must use radiation of roughly the same wavelength as its orbital radius; that is, high-energy γ-rays. But such rays are so energetic that they will almost certainly

eject an electron from the atom. Light rays would not do that; but they would be useless, of course, since their wavelengths are far too long. One might as well try to measure the diameter of a sand grain with a meter stick. Thus, all attempts to measure an electron’s momentum or position will strongly affect the result. This is a completely classical argument but the dilemma can be expressed precisely using quantum mechanics, which is what Heisenberg did. He proposed that if Δx is the uncertainty in a particle’s position introduced by the radiation that is used to measure it, and if Δp is the corresponding uncertainty in its momentum, then the mathematical product of the uncertainties must be greater than or equal to Planck’s constant. Thus:

∆x∆p ≥ h

where h is Planck’s wonderful constant, and ≥ means greater than or equal to. He also estimated the uncertainties to which a particle’s energy E can be measured at time t, and their relationship is:

∆E∆t ≥ h
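As a rough numerical illustration of the first of these relations (taken in the form given above; textbook statements often differ by a small numerical factor), consider an electron confined to an atom-sized region:

```python
# Minimum momentum and velocity spread for an electron confined to a region
# about the size of an atom, using dx * dp >= h in the form given above.
h = 6.626e-34     # Planck's constant, J*s
m_e = 9.109e-31   # electron mass, kg

dx = 1e-10        # position uncertainty of roughly one atomic radius, m
dp = h / dx       # minimum momentum uncertainty
dv = dp / m_e     # corresponding velocity spread

print(f"dp >= {dp:.1e} kg*m/s, dv >= {dv:.1e} m/s")
# The velocity spread (~7e6 m/s) is comparable to atomic orbital speeds, which
# is why the classical picture of a well-defined electron path inside an atom fails.
```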

Figure 7 Photograph of Werner Heisenberg taken in 1927 at the Solvay Conference. (Source: Wikipedia.)

It is important to note that these relations and uncertainties do not arise from any technological deficiencies or errors of measurement. They are absolute, and Planck's quantum specifies the intrinsic accuracy with which such measurements can be made. They are expected to apply anywhere in the universe. Shortly afterward, Heisenberg won the Nobel Prize for Physics in 1932 (a prize that was actually awarded in 1933) "for the creation of quantum mechanics, the application of which has, inter alia, led to the discovery of the allotropic forms of hydrogen." Schrödinger's contributions were recognized at roughly the same time: he was awarded the Nobel Prize for Physics in 1933, sharing it with Paul Dirac "for the discovery of new productive forms of atomic theory."

Bohr's and Heisenberg's approaches to physics could hardly have been more different. Bohr was ever searching for qualitative interpretations that would satisfy his intuition, whereas Heisenberg's mathematical approaches were more austere. Bohr also disagreed intensely with Schrödinger's attempts to interpret quantum mechanics as if it were merely a branch of classical mechanics; Schrödinger was scornful of the idea that transitions between states were governed solely by probabilities; and Einstein's unwillingness to accept that "God plays dice" is legendary. One can get an impression from their biographies of the extensive, prolonged, and passionate discussions at Solvay and other meetings (see Figure 8), during long country walks, or late into the night as they agonized over whose interpretation had the most merit. One also comes to realize that most of the key discoveries in the century's first few decades came from the inspired and informed guesses of young people, many in their twenties, whose speculations had to be justified eventually but whose justification they could not provide at the time (see Table 4). Nowadays, under the current funding rules, young academics can rarely find opportunities for leadership and must work at the bidding of their elders and betters, arrangements we should expect cannot do other than curtail creativity when it is at its highest levels.* But on reading about the frequent and often heated debates of these youthful pioneers, one can also appreciate that they were permeated by a profound sense of mutual trust and respect for each other's views; they all remained the closest of friends although no "right answer" was on the horizon.

Nevertheless, despite all this apparent discord, quantum mechanics emerged as one of the century's most successful and influential theories. Linus Pauling (see Figure 6), a contemporary of Heisenberg, was the first to apply quantum mechanics outside physics. He revolutionized understanding of chemical structure, reactivity, and bonding, for which work he won the Nobel Prize for Chemistry in 1954, and his theoretical work has underpinned a vast variety of profitable developments in chemistry and biochemistry.

* In 2008, the average age at which scientists won their first grant from the US National Institutes of Health, the world's biggest research funding agency, was 43, an average age that is expected to increase in the future. No such grants were made to scientists under 30. (Source: Science, January 25, 2008, p. 391.)

Figure 8 The Solvay Conference, October 1927. This photograph, one of the most iconic in science, features a Who’s Who of the most prominent scientists of the Golden Age, except that Rutherford and W. H. Bragg did not attend. Back row, left to right: A. Piccard, E. Henriot, P. Ehrenfest, E. Herzen, Th. de Donder, E. Schrödinger, E. Verschaffelt, W. Pauli, W. Heisenberg, R. H. Fowler, L. Brillouin. Middle Row: P. Debye, M. Knudsen, W. L. Bragg, H. A. Kramers, P. A. M. Dirac, A. H. Compton, L. de Broglie, M. Born, N. Bohr. Front Row: I. Langmuir, M. Planck, Marie Curie, H. A. Lorentz, A. Einstein, P. Langevin, C. E. Guye, C. T. R. Wilson, O. W. Richardson. Between 1902 and 1954, fifteen of the delegates to this conference won the Nobel Prize in Physics and three won it in Chemistry. (Source: Wikipedia.)

Quantum mechanics enabled the semiconductor, telecommunications, and computation industries, of course; indeed, it is difficult to name a technology-based industry today whose development has not been touched by the early work of these young pioneers and many others unnamed here.

Conclusions

My brief and incomplete outline has covered the stories of some of the most influential pioneers of the new physics a century or so ago, work that laid the foundations for a vast number of unpredicted discoveries throughout the remainder of the century.

TABLE 4. The Discoveries Made by Exuberant Youth

Name | Life | Year | Location | Discovery | Age at Time of Discovery
J. J. Thomson | 1856–1940 | 1897 | Cambridge | The electron | 41
Ernest Rutherford | 1871–1937 | 1899 | Cambridge | α-rays and β-rays | 28
Albert Einstein | 1879–1955 | 1905 | Bern | Photo-electric effect; Special Relativity | 26
Frederick Soddy | 1877–1956 | 1912 | Glasgow | Isotopes | 35
Niels Bohr | 1885–1962 | 1913 | Copenhagen | Hydrogen spectrum: Stationary states | 28
Satyendra Nath Bose | 1894–1974 | 1924 | Dhaka | Bose–Einstein statistics: Bosons | 30
Louis de Broglie | 1892–1987 | 1923 | Sorbonne | Wave–particle duality | 31a
Wolfgang Pauli | 1900–1958 | 1925 | Hamburg | Exclusion Principle | 25
George Uhlenbeck and Samuel Goudsmit | 1900–1988; 1902–1978 | 1925 | Leiden | Electron spin | 25; 23
Enrico Fermi | 1901–1954 | 1925 | Florence | Fermi–Dirac statistics: Fermions | 24
Paul Dirac | 1902–1984 | 1925 | Cambridge | The fundamental equations of quantum mechanics | 23
Erwin Schrödinger | 1887–1961 | 1926 | Zürich | Electron wave equation | 38a
Werner Heisenberg | 1901–1976 | 1927 | Copenhagen | Uncertainty Principle | 26
James Chadwick | 1891–1974 | 1932 | Cambridge | The neutron | 41a
Linus Pauling | 1901–1994 | 1932 | Caltech | The nature of the chemical bond | 31

a Each scientist "lost" some 4 years from his research career because of the Great War.

Apart from humanity's intellectual enrichment, billions now enjoy higher standards of living as a result. New technologies always bring downsides, of course—progress is never risk free—but the net effects have been strongly positive.

Today, physics is at a similar pass to that faced at the turn of the twentieth century. Nowadays, the so-called "Standard Model," by which neutrons and protons have been shown to be composite particles made up of quarks bound together by forces mediated by another set of particles called gluons, has been very successful in explaining many aspects of particle physics at the most fundamental levels. But a large number of profound and intractable problems remain, and who knows what new opportunities solutions to them will reveal? Quantum mechanics cannot be reconciled with gravity. The natures of the recently discovered "dark energy" and "dark matter," which together are thought to comprise some 95% of the observable energy/matter in the universe, are unknown; indeed, it is not even certain that these ideas are real or will survive, and that they are not masking other areas of ignorance such as, for example, the possibility that gravity might be modified over vast distances. David Gross, the American theoretical physicist who won the Nobel Prize in Physics in 2004, was quoted by another American, Lee Smolin (Smolin 2006, p. xv), as saying at a conference recently:

The state of physics today is like it was when we were mystified by radioactivity. . . . They were missing something absolutely fundamental. We are missing perhaps something as profound as they were back then.

The great hope for physics today is called “string theory,” but that hope has been unfulfilled for more than three decades during which time it has not succeeded in making a single prediction that can be checked by experiment. This is a very serious drawback for a theory; indeed, it is strictly a theorist’s theory. Nevertheless, as Smolin points out (Smolin 2006, p. xxiii): I want to emphasize that my concern is not with string theorists as individuals, some of whom are the most talented and accomplished physicists I know. I would be the first to defend their right to pursue the research they think is most promising. But I am extremely concerned about a trend in which only one direction of research is well supported while other promising approaches are starved. . . . It is a trend with tragic consequences if . . . the truth lies in a direction that requires a radical rethinking of our basic ideas about space, time, and the quantum world.

This situation is an inevitable consequence of policies dominated by national governments’ and funding agencies’ recently acquired penchant for setting research priorities that seek to maximize efficiency of resource use and foster perceived national strengths. Moreover, they seem to take every conceivable step to ensure rigorous adherence to these policies. Priorities can only be drawn, of course, from à la carte menus of today’s portfolios of understanding. However, that understanding is seriously incomplete, and not only in physics.


What they should be doing in these circumstances is providing modest backing for the undirected efforts of qualified academics, particularly the young, in whatever enterprise they think is worthwhile for as long as it takes. That policy has been proved successful time and again, especially if we can also ensure that academic environments tolerate the rebellious, the iconoclastic, and the eccentric in general. The current policies apply in every field of scientific endeavor. To paraphrase Winston Churchill, this should be a situation “up with which we will not put.” Surely we must find a new way.

5 Oswald T. Avery: A Modest Diminutive Introverted Scientific Heavyweight

It is probably impossible to unknow something. One can forget, but that is often only a temporary phase. One can be ordered to disregard, as when, say, evidence given in a court of law is deemed inadmissible; but it has been heard, and any intended benefit or damage has probably already been achieved. It is therefore difficult to imagine today the state of understanding that prevailed in the 1920s when Avery first embarked on his career as an independent researcher, and it is difficult to put aside what we know of the vast compendium of subsequent discoveries and developments. As outlined in Chapter 3, advances in physics during the first few decades of the twentieth century had been little short of miraculous; and we had moved from "classical" theories, which in principle allow systems to be determined with arbitrary accuracy and precision, to a world dominated by probabilities and uncertainty. This was not the terrifying uncertainty that humanity had always had to cope with throughout its existence, but an uncertainty arising from intrinsic and unchanging properties of the universe that place strict and fundamental bounds on the accuracy to which measurements can be made and on the knowledge that can be deduced from them. Atomic theory—that is, the revolutionary idea that atoms were real and tangible objects, albeit very small, and not just theoretical constructs to help with analysis—had finally been accepted. We had taken the first steps toward understanding atomic and nuclear matter. Radioactivity had been discovered. Theories of electricity and magnetism had been unified. Special and general theories of relativity had been triumphantly confirmed. Biology had not kept pace, however. There had been little new understanding since the days of Charles Darwin (1809–1882) and his theories of evolution,


and Gregor (Johann) Mendel (1822–1884), whose work, recognized only decades after his death, laid the foundations of genetics. Neither of them did their great works as academics. Darwin could be described as a "gentleman" scientist and searcher after truth who spent some 20 years carefully analyzing the vast amounts of data taken during his epic 5-year global cruise on HMS Beagle, and, of course, who went on to produce in 1859 one of the most famous publications in science: On the Origin of Species.

Mendel was a priest who took the name Gregor when he joined the Augustinian Abbey of St. Thomas in Brno (now in the Czech Republic). He did his pioneering work on genetics using the garden pea as a model system, peas he personally cultivated in the monastery's huge garden. His results, published in German in 1866, achieved "instant oblivion" according to Jacob Bronowski.* They were ignored for years, perhaps because they appeared in an obscure journal and his work did not carry the imprimatur of a great university. They also did not conform to the prevailing theory of "blending inheritance," a term that refers to the hereditary mixing of maternal and paternal elements in offspring that apparently produces features that are an average of parental traits (height, intelligence, hair color, etc.). It was not so much a theory as an accumulation of short-term experience, but it was so commonly accepted as to be ubiquitous. However, Darwin, despite his enormous reputation, also had to struggle for years to escape its popular superficiality, and it is a pity he did not know of Mendel's work.† Yet blending inheritance obviously flies in the face of experience. Homo sapiens has been around for more than a thousand generations; and, according to blending inheritance, the traits of each generation should have gradually converged; and our species should now consist almost entirely of identical clones! Darwin prevailed, of course, but unfortunately Mendel did not live to see his work accepted or even widely discussed. It was resuscitated in 1900 and, as we shall see, went on to inspire many others. As Jacques Monod (1910–1976), a Nobel Prize winner from the Pasteur Institute in Paris and the Salk Institute in California, wrote:

Mendel's defining of the gene as the unvarying bearer of hereditary traits, its chemical identification by Avery, and the elucidation by Watson and Crick of the structural basis of its replicative invariance, without any doubt constitute the most important discoveries ever made in biology. ‡ [Author's note: Mendel did not specifically refer to genes; he used the terms "factors" or "elements."]
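
The convergence argument above can be made concrete with a toy simulation. The sketch below is illustrative only: the trait, its starting spread, and the population size are assumptions of mine rather than figures from the text, and strict blending (each offspring exactly averaging two randomly chosen parents, with nothing new ever added) is, of course, precisely the model the argument refutes.

import numpy as np

rng = np.random.default_rng(0)

# An arbitrary trait (height in cm, say), normally distributed to start with.
population = rng.normal(loc=170.0, scale=10.0, size=10_000)

for generation in range(1, 31):
    # Under strict blending inheritance each offspring is simply the average
    # of two randomly chosen parents; nothing new is ever added.
    mothers = rng.choice(population, size=population.size)
    fathers = rng.choice(population, size=population.size)
    population = (mothers + fathers) / 2.0
    if generation % 10 == 0:
        print(f"after {generation} generations the spread is {population.std():.5f} cm")

# The variance halves every generation, so after a mere 30 generations the spread
# has collapsed by a factor of tens of thousands; after a thousand generations
# everyone would, in effect, be a clone.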

Naturally, therefore, the great scientific questions of the 1920s revolved around heredity, and almost invariably, human heredity. Thus, they concentrated on mathematical analyses of the related characters of cousins and twins, * Bronowski (1981), p. 240. Bronowski was a pioneer of scientific history and presenter of the hugely successful BBC TV series, The Ascent of Man, which ran in the 1970s. † An excellent review of Darwin and blending inheritance is given in Vorzimmer (1963). ‡ Bronowski (1981), p. 247.


and sought to answer such burning questions as whether racial theories had any scientific foundations, questions that would persist for decades to come. However, it was widely accepted that the gene was the indivisible "atom" of hereditary transmission and that genes were organized into chromosomes within living cells; but like atoms, genes could not be seen directly even using the most powerful microscopes, so they were a somewhat nebulous concept and their existence had to be inferred. Thus, chromosomes were characterized only by their shapes—nothing was known about how they operated.

Mutation, for example, which is now known to play a vital role in evolution, was still referenced in the 1920s using a phrase coined by the pioneering Swedish biologist Carl Linnaeus (1707–1778) almost two centuries earlier when working on plant hybrids. Linnaeus, a deeply religious man, believed, in common with many others, that all living things—plants as well as animals—had been created at some divine moment in the precise profusion in which we see them today. But plant hybrids could not possibly fit into this conservative picture. In one of the most heroic tasks in scientific history, he had set out to categorize all the living organisms known at the time, a task that, working at his usual frenetic pace, was to occupy him for 25 years. He devised a classification scheme based on a single Latin name to indicate the genus and another that could be used as shorthand for the species—Homo sapiens, for example. The binomial system, as it came to be called, was first established for plants in 1753 and, 5 years later, for animals. That system is still used today, with some modifications. However, he refused to allow the existence of hybrids to deflect him, dismissing them jokingly as "the work of Nature in a sportive mood." His observation remained unexplained, and indeed mutations were called "sports" until well into the 1930s. Biochemistry, a copious source of subsequent advances, had not yet emerged as a discipline; its precursor, physiological chemistry, was concerned with extracellular chemistry, and such general studies as the chemistry of digestion and body fluids.

Oswald T. Avery (1877–1955), a slightly built wisp of a man, was the son of a clergyman and was born in Halifax, Nova Scotia. He moved with his family to New York City 10 years later, becoming a US citizen in 1917. Completing his medical training at the College of Physicians and Surgeons of Columbia University in 1904, he moved to the Hoagland Laboratory, a private lab founded by a Brooklyn physician, Dr. C. N. Hoagland (1828–1899). Hoagland was also an entrepreneur and had founded the Royal Baking Powder Company, from which he made a sizable fortune. When his grandson died of diphtheria, a bacterial infection (at that time a scourge of children in particular), he used his fortune to create in 1887 his eponymous laboratory and dedicate it to bacteriological research. Here, Avery "learned bacteriology and learned research," as he later acknowledged. The small lab's finances were always precarious; and not surprisingly, in 1913 Avery accepted a move to the newly established, prestigious, and almost certainly well-funded Rockefeller


Institute for Medical Research* at a salary of $2000 per year. He was invited to work on lobar pneumonia, the disease from which his mother had died, and indeed he devoted the rest of his long scientific career to understanding the pathogenesis of infectious disease. As René Dubos, Avery's biographer and former close colleague, has explained, the general view among American physicians during the 1870s was that laboratory science could never contribute anything of practical value (Dubos 1976, p. 14). Quoting the Harvard medic, Dr. Henry J. Bigelow:

Thus, the only worthwhile form of medical science was assumed to be the knowledge acquired by observation at the bedside. Such notable scientific pioneers as Louis Pasteur (1822–1895) in France and Robert Koch (1843–1910) in Germany slowly changed this attitude, but there were very few centers of medical research in the US. However, in 1873, the rich Baltimore merchant, Johns Hopkins, left $7 million for the founding of a new research university, a hospital for the investigation of disease, and a medical school linked to both; and the American climate slowly began to change.

John D. Rockefeller made his huge fortune from oil and in real terms was one of the richest people who has ever lived. At the turn of the century, there were, of course, no antibiotics, and there was a constant fear of childhood diseases and the "microbial agents" that cause them. In 1901, Rockefeller's eldest grandchild died from scarlet fever, a tragedy that galvanized him as it had Hoagland; and he finally agreed to found the institute he had been considering for some years. It was to be dedicated to improving health through purely disinterested scientific inquiry, a surprisingly altruistic and visionary decision. Rockefeller did not require that the Institute should concentrate on any specific disease, such as scarlet fever, for example, as philanthropists tend to do nowadays, but that it should be as wide ranging as possible. His long-trusted adviser on philanthropic matters, Frederick Taylor Gates, had been impressed by a textbook written by the very influential William Osler, a Johns Hopkins professor of medicine who later was appointed the Regius Professor of Medicine at Oxford University and subsequently knighted. Osler had said:

There has been huge progress since then, of course, but similar statements on some diseases can still be made today. My younger sister Jean died a few * In 1965, the Institute changed its name to The Rockefeller University.


years ago from Alzheimer's; and all the medical profession could do, more than a century after Osler's honest appraisal, was to make her as comfortable as possible, while from the moment of diagnosis we relatives watched and waited with mounting horror and anguish as her condition deteriorated and she slowly and inevitably slipped away.

However, Gates was so inspired by Osler's book that he recommended that Rockefeller set up an institute of medical research in the United States, where qualified researchers (he used the word "men" of course, as was the custom then!) could give themselves up to uninterrupted study and investigation, on ample salary, entirely independent of medical practice. Rockefeller agreed and insisted that the institute's investigators should be given complete intellectual freedom and, furthermore, that the institute itself should be free from the doubtless well-intentioned influences of other universities and hospitals. Clearly, therefore, he understood how science works! He chose what was then a rural site a few miles from the center of New York City, and a strictly utilitarian and functional architectural style built with materials of the highest quality. Its buildings are still in use today. There would be no disciplinary departments in the usual academic sense, so there would be no aggrieved toes to be accidentally stepped on that might prevent researchers from exploring as freely as they wished. Scientists would be selected for what they had done or what interested them rather than for the expertise they might bring. Such freedom was not then entirely unknown elsewhere, of course, but the Rockefeller University has generally preserved it to the present day.

It is very difficult to imagine a scientist seemingly less likely than Oswald Avery to make discoveries that would rock the scientific world to its foundations and change it forever (see Figure 9). Avery seemed to cultivate obscurity. When he was in the lab, he wore his customary tan lab coat, the conventional apparel usually worn by technicians and other support staff. He would wear a white coat if a VIP was expected, but perhaps he thought that white might be presumptuous when conducting experiments with Nature. His contemporaries record that he was the mildest, most unassuming, and most gracious of men, albeit with a dogged determination, universally and affectionately known as "The Professor" or simply "Fess" even long after he had finished teaching. All he ever wanted, apparently, was to be allowed to get on quietly with whatever job he had in hand without fuss. But surely radical discoveries need panache and flair? Yes, they usually do, but they do not necessarily have to be accompanied by fanfares of trumpets. Dubos reports that Avery was a late starter in science. He had been appointed to the Institute when he was almost 36 years old, and there had been nothing to indicate that he would shortly be making major contributions to the biomedical sciences. Dubos says (Dubos 1976, p. 70):

One might assume that the profound change in Avery's research style that began in 1916 was simply the consequence of his being provided with generous budgets and elaborate resources for experimental work, but this is not the case. . . . Avery


Figure 9 A photograph of Avery taken in 1937. (Source: Wikipedia.)

never had a large laboratory at the Institute, and he was always extremely frugal in the use of his research facilities. He rapidly developed into a creative scientist not because he was provided with funds and technical help, but because the Institute provided an intellectual and human atmosphere that suited his temperament.

This enlightened approach to science policy is very different from the modern trend of searching out small numbers of superstar scientists and giving them large sums of money to tackle the problems that consensus deems to be the most important of the day. The Venture Research project I started in 1980 was sponsored by BP and ran throughout the decade; it also showed that large sums of money and the most expensive equipment are not necessarily essential for world-class transformative research (Braben 2008, p. 145). We also found that mutual trust and total freedom are crucial. But the myth persists that research strategies based on selecting and lavishly funding big-bang projects led by widely acknowledged leaders are the best ways forward. Infectious disease has long been a scourge. Influenza, a viral infection, is still a major threat. More people died from the Spanish flu epidemic of 1918– 1919 than from the fourteenth century Great Plague or the prolonged carnage of World War I. Avery had turned his attention to lobar pneumonia and the


infection arising from the bacterium Streptococcus pneumoniae. However, he was not concerned with the treatment of the disease's symptoms, as was fashionable among medics at the time, but with discovering its causes. Bacteria are the simplest known living organisms, but they are still complex and have three distinct outer skins. The outermost skin, a fuzzy layer composed of polysaccharides, is called the cuticle (or capsule); the next layer is the cell wall, which determines the cell's shape; and inside that is an inner layer, the plasma membrane, which controls a cell's metabolism. Avery had quickly discovered that virulence was determined by the polysaccharides in the cuticle, that different strains of pneumococci bacteria had different polysaccharides, and that bacterial strains could be distinguished using the lock-and-key properties provided by specific antibodies.

In 1928, Fred Griffith, a UK government scientist and a contemporary of Avery, had published news of a remarkable discovery. Experimenting with mice, he first found the obvious result that the pneumococci that had lost their capsules and therefore could not be virulent—which he called type R—indeed did not kill mice when injected into them. In another experiment, he found that heat-killed but fully intact pneumococci of a different strain—which he called type S—also did not kill. However, he then made the astonishing discovery that an S and R mixture injected into a mouse caused its death, and an autopsy performed on the unfortunate mouse showed that it had died of lobar pneumonia caused by being infected with type S pneumococci, the very type he had heat-killed! He also found later that the harmless R form could be transformed into the deadly S form in a test tube (that is, in vitro), so the mouse itself had played no role, which led him to postulate that the S and R variations are caused by reversible mutational changes in the bacteria. Dubos relates that Griffith's spectacular results were discussed in Avery's department, in which Dubos was now working (Dubos 1976, p. 136):

But we did not try to repeat them, at first, as if we had been stunned and almost paralyzed by the shocking nature of the findings.
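
For clarity, the outcomes described above can be tabulated in a few lines of code; the grouping and the wording of the outcomes are mine, a plain restatement of the account just given rather than anything taken from Griffith's paper.

# Griffith's injections and their outcomes, as described in the account above.
# "R" is the harmless capsule-free form; "S" is the virulent, capsule-bearing form.
experiments = [
    ("live R alone", "mouse survives"),
    ("heat-killed S alone", "mouse survives"),
    ("live R mixed with heat-killed S", "mouse dies; living S-type pneumococci found at autopsy"),
]

for injected, outcome in experiments:
    print(f"{injected:32s} -> {outcome}")

# Neither preparation kills on its own, yet the mixture does: something in the
# dead S cells has "transformed" the harmless R cells into the deadly S form.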

Not surprisingly, therefore, Dubos reports that Avery was initially very skeptical. His own research had convinced him that bacteria are stable biological entities that do not change their spots, and he firmly believed in the fixity of immunological types. Even in England, Griffith’s result was not widely accepted. But although Avery had neither met nor corresponded with Griffith, he knew of his reputation. He therefore encouraged his young physician colleague, M. H. Dawson, to repeat the experiments. They were successful, and convinced Avery that pneumococci could indeed undergo transmissible changes in immunological specificity. But what was causing the transformation? Shortly afterwards, Dawson left for another lab, and Avery entirely on his own initiative decided to devote the entire resources of his lab to explaining Griffith’s bizarre and astonishing discovery.


Nowadays, a modern Avery at a conventional university would have to seek permission from his peers before he would be able to commit substantial resources to a similar problem. But few people believed Griffith's results, and in such circumstances today such an application probably would be unsuccessful because Avery's peers, who did not share his dedication and personal involvement in the problem, would have thought that there were higher priorities.

However, the replacement for Dawson in Avery's group, J. L. Alloway, soon made another unexpected and important discovery. The transformation induced by the (killed) S-type cells did not require the entire intact bacterial cell: after dissolving the S cells, the transformation could be induced by a soluble fraction of them, which he was also able to isolate using chemical means. Thus he obtained "a thick syrupy precipitate" of the transforming agent; and so it turned out that Alloway was the first person to isolate the fibrous substance that 10 years later would be identified as deoxyribonucleic acid, DNA.

An ability to reproduce results in experiments is essential in any scientific field, but in biology it is a recurring nightmare. All biological systems and indeed life itself are extremely complex and interrelated, and it is very difficult indeed to isolate a specific component for rigorous study. Thus, it can often happen that one makes an impressive measurement that no one else is able to repeat. This was the case with Alloway's discovery; and it took some years of patient experimentation for Colin MacLeod, who joined Avery in 1935, to develop the procedures that would lead to reproducible results. The results eventually showed, however, that the transforming substance was not a protein, as its molecular weight was far too large: it was estimated by Maclyn McCarty to be in the range 0.5–1.0 million Daltons, so the indications were that it was DNA. It might be difficult today to fully appreciate the significance of what Avery's team had done because DNA is now one of the best-known molecules in science. However, even into the early 1940s DNA was still regarded as little more than a repetitious assemblage of phosphates, sugars, and nitrogen bases that had no obvious role in biology. Expert opinion was that genes were made of special types of protein molecules, and DNA was not thought to have the intrinsic complexity to support that function. Furthermore, Avery and his colleagues* had found that their effect was type specific, which meant that every pneumococcus bacterium had to have its own "personalized" DNA to act as the genetic bearer of immunological specificity, a conclusion that again was not supported by experts on DNA structure. The latter indeed thought that Avery's results were being caused by an as-yet-unidentified contaminant. Avery was not unsympathetic to that view. He remembered, of course, that he had once thought that pneumococcal virulence was caused by the polysaccharides in the cuticle, so he was wary of exposing * Dubos reported that Avery's team was usually small, often consisting at any given time of only one or two scientists plus two technicians.


his group to further skepticism. Before publishing his conclusions he circulated them widely to colleagues he could trust to tell him if they thought he had made an obvious mistake. The final paper was submitted in November, 1943, and published in February, 1944 (Avery 1944). He could not exclude all doubt, however, and the paper acknowledges that there was a possibility that the transformation might have been caused by minute amounts of some very active protein that had escaped detection. Thus, when Avery was 67, an age when convention nowadays supposes that all creativity has been worn out, he published results of more than 15 years' systematic attempts to identify the chemical nature of a substance that seemed to change heritable properties in bacteria. In the light of subsequent developments, Avery's discovery was one of the most important of the twentieth century. However, it was greeted initially with indifference and generally ignored. He lived until 1955, but his work was not recognized by the ultimate accolade of a Nobel Prize. Had he lived a few more years, it is highly likely that it would have been. It would seem that the scientific community generally needed more time to adjust to the radically new paradigm he had created.

It was not completely ignored, however; and the wonderful Henry Dale, whom I introduced in the Introduction, was then President of the Royal Society and had in 1945 proposed Avery for the Copley Medal, "the highest scientific distinction that the Royal Society has to bestow."* Since Avery was reluctant to travel, Dale had made the long trip from London to the Rockefeller Institute (remember, this was 1945!) specifically to bring him the Medal, an arduous undertaking that was itself a remarkable tribute. It is also possible that, as René Dubos recalls, the Nobel Committee was not convinced that Avery had realized the full importance of what he had done. Ironically, Avery had been nominated for a Nobel Prize in the late 1930s in recognition of his work in immunochemistry. Indeed, his Copley Medal citation was "for his success in introducing chemical methods in the study of immunity against infective diseases." His 1944 discovery should therefore have consolidated his suitability for the Prize. However, the super-cautious and honest Avery himself had acknowledged that he could not rule out the possibility that his results might have been affected by minute amounts of a contaminant. As Dubos says (Dubos 1976, p. 159):

The Nobel Committee, probably not accustomed to such restraint and self-criticism bordering on the neurotic, "found it desirable to wait until more became known about the mechanism involved in the transformation."

Furthermore, Dubos comments that neither Fred Griffith nor Avery was a person who makes himself obvious to international committees. In addition, * Previous winners include Benjamin Franklin in 1753, Humphry Davy in 1805, Charles Darwin in 1864, Albert Einstein in 1925, and Niels Bohr in 1938.


Avery was famous for understatements: thus the conclusion section of his momentous 1944 paper consists of a single flat, deadpan sentence, and nothing else: The evidence presented supports the belief that a nucleic acid of the desoxyribose type is the fundamental unit of the transforming principle of pneumococcus type III. [Author’s note: Type III here is the type S in the group’s earlier work.]

Low-key was Avery’s style and, sadly, it seems to have cost him the Nobel Prize. However, Dubos recalls that Avery was not in the least upset by this Nobelian myopia—they were later to apologize for their omission. He knew full well the significance of what he had done, and that realization was enough for him. Nowadays, the chances of getting funded depend on a scientist’s ability to positively project a potential for achieving socioeconomic benefits. But the scientist must also be working in a well-defined and relevant field so that applications may be judged by peer preview. As the Canadian biologist Robin Arthur Woods has pointed out, the events that led to the discovery that DNA was the genetic material took place outside the mainstreams of genetic investigation (Woods 1980, p. 16). The organisms involved were various strains of pathogenic bacteria, and so the experimenters duly reported their findings in medical journals, and geneticists probably would have been unaware of them. Finding suitable peers might therefore have been problematic. What are the chances under today’s rules that an introverted, modest scientist such as Avery or Griffith would win support to tackle one of the similarly profound problems that still abound?

6 Barbara McClintock (1902–1992): A Patient, Integrating, Maverick Interpreter of Living Systems

The road to success has always been strewn with traps for the unwary, but some 50% of the population must deal with a hazard they cannot avoid: Until the early twentieth century, women were generally supposed (at least among men!) to be unsuited for scientific careers. The conspicuous successes of Marie Curie, née Sklodowska, who won two Nobel Prizes, no less, had not dented the long-prevailing belief in the "natural inferiority" of women. Indeed, some members of the French Académie des Sciences (Marie was born in Warsaw but did most of her work in Paris) tried to exclude her from her first Nobel Prize by failing to mention her enormous contributions in their written nomination for her husband, Pierre Curie. They would probably have succeeded without the intervention of a Swedish mathematician, the rather aristocratic Gustav Mittag-Leffler (1846–1927), a highly influential member of the Swedish Academy of Sciences, and a staunch advocate of women in science (Quinn 1995, p. 188). What a pity he was no longer around to prevent the gross injustice actually done to Lise Meitner (1878–1968) over the 1944 Nobel Prize for the discovery of nuclear fission awarded to her long-term collaborator in Berlin and close friend, Otto Hahn (1879–1968).* As technically speaking she was Jewish (she had actually been brought up a * Its announcement was delayed by the Royal Swedish Academy of Sciences until November, 1945, and the presentation until 1946.



Protestant), she had fled in 1938 from the Nazi threat of imminent arrest, or worse, at little more than an hour's notice "with 10 marks in her purse," but with a diamond ring given to her by Hahn, who advised her that she could sell it in an emergency. She eventually arrived safely in Stockholm via Holland and Denmark, where she stayed with Niels Bohr for a few weeks. Hahn had written to Meitner shortly afterward about his latest baffling results that defied conventional wisdom. Meitner and her nephew Otto Frisch (1904–1979), another refugee from Germany, correctly and brilliantly interpreted them as nuclear fission; and indeed they were the first to use that seminal term. Albert Einstein said later that the resultant Meitner and Frisch (1939) paper was a crucial step toward the bomb, and he thought it was even more important than the work of Hahn himself. No wonder Einstein nicknamed Meitner "the German Madame Curie." Nevertheless, despite his and Bohr's efforts to get Meitner included, Hahn was awarded an unshared* Nobel Prize for Chemistry in 1944 for this discovery. Bearing in mind Einstein's and Bohr's unimpeachable reputations, there seems little doubt that some of those involved in this disgraceful decision had agendas that had nothing to do with science. Oswald Avery suffered a similar injustice, of course. Clearly, therefore, even science, the most absolutely based of all human activities, is not immune to Machiavellian manipulation.

Barbara McClintock had a relatively easy entry into the profession. A daughter of a physician, she had a very supportive family; but her mother especially needed to be convinced that enrolling at a university would be a better option than cultivating her marriage prospects. Common sense and McClintock's determination prevailed, however. She began studying botany at Cornell University in 1919 and went on to earn a PhD in 1927. She chose to study the cytogenetics of the maize plant, Zea mays, for her PhD thesis; and, as all young scientists starting out should be aware, that early-stage choice defined the rest of her career. She was highly successful from the very beginning. While still a graduate student, and taking full advantage of her consummate expertise with the microscope, she had discovered (with Lowell Randolph, a faculty member to whom she was appointed as an assistant) the first triploid to be observed in maize.† Unfortunately, their collaboration quickly failed as they did not get along (getting along being an essential albeit unscientific requirement for any collaboration); but their work together led nevertheless to the creation of an important new discipline: maize cytogenetics. Soon after, she developed the sensitive staining techniques that allowed her to identify for the first time maize's 10 chromosomes and also to refine an ability to distinguish between * The Statutes of the Nobel Foundation allow the Prize to be awarded to a maximum of three persons in each category. † A plant with 3 copies of every chromosome rather than the usual 2. In humans, trisomy usually results in the death of the embryo; but, if it occurs on chromosome 21, it causes Down's syndrome.


them, major discoveries that led to publication in the prestigious journal, Science, all while she was still in her twenties. Naturally, Cornell wanted her to stay on and appointed her an instructor. Her work on genetics, which began in the 1920s, increasingly challenged the conventional assumption that an organism's genome was an unchanging set of instructions passed between generations. She focused on studying an organism's response to threatening challenges, how it sensed those challenges, and how its genome responded to mitigate them. She eventually discovered that these responses were controlled by transposable elements within the genome that control gene expression or repression in precise ways. Her analysis of the ways these elements operate won her the Nobel Prize for Physiology or Medicine in 1983.

When she started her career, it was already well established that chromosomes were made up of genes arranged like "beads on a string," and that the genome—an organism's complete book of blueprints for controlling replication, division, and every aspect of metabolism—was made up of sets of chromosomes. But genes were thought to be indivisible "atoms of heredity" that coded for specific actions independently of each other. Chromosomes, the chapters or sections of an organism's book, were similarly thought to be invariant with time for any given species, except, that is, for mutations—changes introduced by accident or otherwise. The very word "chromosome" indicates some of the history of genetics. Derived from the Greek words for "colored body," it refers to the color it takes up when stained with a dye. The experimenter's challenge is to choose a dye that allows a specific chromosome to be identified and studied under the microscope. McClintock excelled at these skills. In particular, she came to realize the need to choose the precise stage—pachytene—in the maize sexual reproduction cycle (meiosis) when observations should be made. Choose the wrong stage—metaphase, say—and it becomes difficult to observe the relative positions of the centromeres and, hence, the arm lengths in each chromatid pair, information that further helps to identify positively each of an organism's chromosomes (see Figure 10). She also reveled in these skills, perhaps arrogantly, which may have been a contributing factor to the failure of the collaboration with her "boss" Lowell Randolph, who apparently was not as technically accomplished. She did not suffer fools whatever the consequences.

Her early career was full of disappointment. As Nina Fedoroff (1996), a distinguished modern scientist who in many ways has inherited McClintock's magnificent scientific mantle and who also works on maize genetics, writes:

In 1933, McClintock received a Guggenheim Fellowship to go to Germany.* She was utterly unprepared for what she encountered in pre-war Germany, and she * Comfort (2003), p. 61. Comfort reported that she would not discuss her dreadful experience of Nazi Germany, other than to comment, "Nobody smiled."


Figure 10 A chromosome portrayed during cell division. The blue lines represent coiled DNA. 1 indicates a chromatid, one of the two identical parts of the chromosome. 2 indicates the centromere, the point of intersection of the chromatids. 3 indicates the short arm of the chromatid. 4 indicates the long arm of the chromatid. (The scale indicated in the original figure is 0.2–20 µm.) (Source: Wikipedia.)

returned to Cornell before the year had elapsed. Her prospects were dismal. She had completed graduate school seven years earlier and had already attained international recognition, but, as a woman, she had little hope of securing a permanent academic position at a major research university. Her extraordinary talents and accomplishments were widely appreciated, but she was also seen as "difficult" by many of her colleagues—in large part because of her quick mind and intolerance of second-rate work and thinking. Finally, in 1936, Lewis Stadler secured her a position at the University of Missouri, where she started working on what happens to chromosomes broken by X-irradiation. Although her reputation continued to grow, her position at Missouri was tenuous—she was definitely a troublemaker, reprimanded regularly for such stunts as climbing walls to get into the laboratory so she could keep on working at night.

Nevertheless, her career did progress, which is a tribute to the high levels of tolerance shown to rebellious youth until not so long ago. Students and indeed all young people should as far as possible bring an air of cavalier irreverence to their work, of course. Their purpose should be to question all authority rather than to worship it, even if on occasion that questioning might descend into ridicule. As the Anglo-Irish satirist Jonathan Swift (1667–1745)


elegantly showed long ago in his masterly Gulliver's Travels, for example, ridicule may sometimes be the only available defense against excessive bureaucracy if all reasoned argument has failed. Sadly, however, some universities today tend not to be amused by such behavior, at least in the UK, and might indicate their displeasure by slapping a "bringing-the-university-into-disrepute" charge onto those they deem to have stepped out of line. Such charges can often bring serious consequences, such as dismissal, when less pompous organizations would respond with a rebuke and a rap over the knuckles.

In 1941, her long-term friend and collaborator Marcus Rhoades and a senior colleague, Milislav Demerec (1895–1966), arranged for her to meet Vannevar Bush in Washington. They must have been very well connected. Bush was then on the threshold of becoming the overall director of the Office of Scientific Research and Development, the wartime agency responsible for coordinating the entire US scientific effort for military purposes, reporting directly to President Roosevelt. The meeting changed her life. Putting aside for the moment the awesome responsibilities of his new job, the prescient Bush saw her potential immediately, and, just as importantly, they hit it off. As he was also President of the Carnegie Institute in Washington, he offered her an appointment at the Institute's Cold Spring Harbor Laboratory, which was shortly to be directed by Demerec.* She stayed there for the rest of her life. Speaking in 1980, long after she had retired, she said (Comfort 2003, p. 66):

That's what the Carnegie's for, to pick out the best people they can find and give them freedom.

Universities everywhere today certainly do their utmost to appoint the best people they can find. However, even though it is well known that new appointees are often inspired and full of get-up-and-go, today's policies mean that, basic salary apart, a publicly funded university (and many private ones, too, since they tend to have similar policies) usually has no choice but, in effect, to condemn its ambitious recruits to making endless applications to external agencies for research funds, with each application on average having only about a 25% probability of being funded and rarely supporting a specific project for more than 3–5 years. Jonathan Swift's successors were never more urgently required!
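
A rough sketch of the arithmetic behind those figures may help; the 25% success rate comes from the sentence above, while the 4-year grant length is simply an assumed value within the 3–5 year range quoted.

# What a 25% success rate per proposal implies for a researcher who must
# stay continuously funded. Both inputs are taken from, or assumed within,
# the figures quoted in the text above.
success_rate = 0.25        # average probability that any one application succeeds
grant_length_years = 4     # assumed, within the 3-5 year range mentioned

applications_per_grant = 1 / success_rate                      # expectation of a geometric trial
applications_per_decade = applications_per_grant * (10 / grant_length_years)

print(f"expected applications per funded grant: {applications_per_grant:.0f}")
print(f"expected applications per decade      : {applications_per_decade:.0f}")
# Roughly four proposals per grant, or about one substantial application a year,
# on average, just to keep a single project continuously funded.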

* In 1901, Andrew Carnegie transferred $10 million in US Steel Corporation bonds to establish the Carnegie Institution of Washington. The institution’s aims were to strengthen American scientific research and education and to discover the “exceptional man,” wherever he might be found. Scientific staff would have as much academic freedom as they wished, would have no teaching duties, and would have negligible administrative obligations. It was renamed the Carnegie Institution for Science in 2007. Its resources are devoted, “to ‘exceptional’ individuals so that they can explore the most intriguing scientific questions in an atmosphere of complete freedom.”


Carnegie indeed turned out to be the perfect choice and suited her temperament. Nina Fedoroff (1994) says in her memoir of McClintock:

Carnegie protected her independence and had more than the modest amount of land she required to cultivate her maize. The Institute also had the patience to tolerate the inevitably slow pace of her work: her experiments usually required a growing season, which is about a year. Contrast that with geneticists who use the fruit fly Drosophila melanogaster as a model system, which has a generation time of less than 2 weeks. However, she preferred her beloved maize, which she thought had many more potentially recognizable and manipulatable characteristics to compensate for the slower experimental pace. As Comfort relates (Comfort 2003, p. 70), her experiments:

. . . were of two basic types. She did breeding experiments, either self-fertilizing the plant or crossing it with other stocks whose genetic constitution she knew, to determine whether the gene or genes involved behaved in the way she predicted. She also examined the chromosomes under the microscope, looking for breaks, inversions, rings she thought underlay the anomaly of pattern. Cytogenetics is in essence the cellular basis of heritable patterns.

McClintock was lucky too that as she arrived at Cold Spring Harbor, Demerec, with whom she had briefly been friendly while they were students at Cornell, had been appointed director of both the Biological Laboratory and the Genetics Department (of the Carnegie Institute); and he was determined to make his new domain into a world-class center for research and teaching in microbial genetics. Furthermore, he did not exclude his own research from that strategy and used his bacterial genetics experience to increase the yields of penicillin production for the US Army by orders of magnitude, no doubt saving countless lives. Max Delbrück (1906–1981), the German theoretical physicist, spent a postdoctoral fellowship working in Denmark where Niels Bohr inspired him to take an interest in biology. He subsequently moved back to Berlin as an assistant to Lise Meitner and also so that he could be close to the Kaiser Wilhelm Institute, which specialized in biology. The rise of Nazi-party influence encouraged a small group of physicists and biologists to meet in Berlin to discuss less politically motivated problems and to plan moves elsewhere. Delbrück got a


Rockefeller Fellowship and went to Caltech until September, 1939, and then to Vanderbilt University on an instructorship, where he started to collaborate with Salvador Luria (1912–1991), an Italian biologist then working at Columbia University. They were both, technically speaking, enemy aliens, free from the draft, of course, and free therefore to concentrate on science. They were also attracted by Demerec's vision for Cold Spring Harbor. Delbrück was an expert in bacteriophages, viruses that attack bacteria and which are usually called "phages."* For Delbrück, phages were the hydrogen atoms of biology, as they seemed to be the simplest organisms capable of replication, albeit with a little help from their bacterial host. He was inspired to begin a series of rigorous, highly mathematical 2-week summer courses, and the lab was soon filled every summer with increasing numbers of eager, bright, irrepressibly outrageous young physicists—the phage group. This, of course, is precisely what any academic institution needs, although they do not need to be physicists. However, being physicists, they were obviously determined to impose what they saw as their absolute authority on genetics and to discover new physical laws that would explain the fundamentals of life—work that later helped create the new discipline of molecular biology. They were very successful in all of this; some would say that perhaps they were too successful, as thereafter the so-called "reductionist" approach to genetics, and indeed to biology in general, came to dominate research in the field to the virtual exclusion of all others. I will return to this problem in Chapter 8.

The phage group's focus was the gene—how it could be mutated and how genes control an organism. Using phages, they usually could do an experiment in less than a day, whereas McClintock would usually need a year. However, she was an integrator, as she put it, and was therefore interested in collating, interpreting, and incorporating the various responses of the whole organism into a coherent picture, the very opposite of the reductionist approach. In contrast with the phage group, she was interested in how an organism controlled its genes to allow it to respond adequately to external threats (drought, disease, heat, etc.). Thus, at a time when those not actively involved in war work could have been isolated, Carnegie became a vibrant summer center where creative tension flourished. There also seemed to be a huge amount of mutual respect despite their differing approaches and temperaments. See Figure 11 for a photo taken in 1947.

However, her plans were not revolutionary. She did not set out to discover mobile genetic elements, as initially she had no suspicion they existed, but she planned to extend her broken chromosome studies and to discover new maize genes. She thought that doing good science meant that one had to "listen to one's material, let it guide one's interpretation." * Delbrück later shared the 1969 Nobel Prize for Physiology or Medicine with Luria and Alfred Hershey for their discoveries concerning the replication mechanism and the genetic structure of viruses.


Figure 11 Photograph of Barbara McClintock working at the Cold Spring Harbor Laboratory taken in 1947. (Source: U.S. National Library of Medicine.)

Thus, she designed experiments not so much to prove a point as to measure her organism's response—maize in her case—to whatever challenge she induced, painstakingly integrating her results to yield as coherent a picture as possible. She used traditional plant breeding techniques, selecting for specific traits such as variegation by methods little different from those used by Gregor Mendel almost a century before. But she extended them into the chromosome domain and tried to identify the genes responsible for those specific traits. She also discovered a new way of using mutations. They were usually induced by irradiating the sample (for example, seeds) with X-rays, but this scattergun approach blasts the whole organism and offers no way of inducing mutations at specific sites in the chromosome. She needed a reproducible and reliable way of solving that problem. As Fedoroff (1994) explains:

McClintock developed a method for using broken chromosomes to generate new mutations. Among the progeny of plants that had received a broken chromosome from each parent, she observed unstable mutations at an unexpectedly high frequency, as well as a unique mutation that defined a regular site of chromosome breakage. These observations so intrigued her that she began an intensive investigation of the chromosome-breaking locus. Within several years she had learned enough to reach the conclusion, published in 1948, that the chromosome-breaking locus did something hitherto unknown for any genetic locus: it moved from one chromosomal location to another, a phenomenon she called transposition.


Fedoroff (1996) continues her narrative: She’d never seen anything like a moving gene, and she worked hard in the ensuing few years to prove that this funny gene really did move. In 1948 she made the first announcement in the literature that there were transposable genetic elements. To geneticists of the time, this was roughly equivalent to claiming that the kitchen could occasionally jump into the attic in the middle of the night. By the time of the 1951 Cold Spring Harbor Symposium, she had plenty of evidence and presented a paper describing her work. As Barbara told it, the reaction to her presentation ranged from perplexed to hostile. Later, she published several papers, and from the paucity of reprint requests, she inferred an equally cool reaction on the part of the larger biological community to the astonishing news that genes could move. After that, McClintock wrote up her results as if for publication and filed them, publishing little more than concise summaries of her results in the annual Year Book of the Carnegie Institution of Washington.

Indeed, McClintock’s work was greeted by disdain or hostility for many years. In 1953, she decided to stop publishing in the scientific literature, explaining later (1973):* With the literature filled to the exhaustion of all of us, I decided that it was useless to add weight to the biologist’s wastebasket.

In January 1950, she wrote to Charles Burnham:† You can see why I have not dared publish an account of this story. There is so much that is completely new and the implications are so suggestive of an altered concept of gene mutation that I have not wanted to make any statements until the evidence was conclusive enough to make me confident of the validity of the concepts.

It is ironic that the scientific literature has increased between 10- and 100-fold since McClintock began her program of self-denial (Larson and von Ins 2010), but very few researchers nowadays can afford to stop struggling to add to it without risking serious damage to their future funding and employment prospects. The slogan, “publish or perish” was coined decades ago, but it has never been truer than today. Furthermore, publication alone is not enough. Journals themselves are rated according to their perceived “impact factors,” and one also risks severe sanctions unless a proportion of one’s publications has been blessed by supposedly heavyweight journals such as Nature or Science. * Letter to John Fincham, a biologist at the University of Leeds, May 16, 1973. † Quoted in Profiles in Science, “The Barbara McClintock papers,” National Library of Medicine.


Thus, experiments beginning with broken chromosomes in the early 1940s led her to observe genetic instabilities that had not been seen before. Instead of the familiar patterns of chromosome loss, the new instabilities showed regularities that told her that the organism was responding to challenges in a programmed manner that cushions the challenges' effects. She subsequently spent no less than 3 decades documenting these instabilities and uncovering an unsuspected array of response systems. As she explains in her Nobel Lecture (1983):

These proved to be the transposable elements that could regulate gene expressions in precise ways. Because of this I called them "controlling elements." Their origins and their actions were a focus of my research for many years thereafter. It is their origin that is important for this discussion, and it is extraordinary. I doubt if this could have been anticipated before the 1944 experiment. It had to be discovered accidentally.

McClintock’s transformative discovery is now at the very heart of today’s research in genetic engineering and the vast profitable industry that research has spawned. Research policies today, however, rarely cater for serendipity or to such determined and courageous loners as McClintock. For many years she was the sole pioneer of mobile genetic elements, and it is difficult to imagine that the consensus among her peers, if any could be found in the 1940s or 1950s, would have been that her work should be given high priority in terms of the benefits it might bring as funding agencies almost universally demand today. In addition, McClintock had found a home—the Carnegie Institute’s Cold Harbor Laboratory—that tolerated her idiosyncrasies and gave her complete and unqualified intellectual freedom for many years simply because they believed in her. She was fully appreciative of that fact. In 1954, she was invited to return to the University of Missouri to a tenured position. She replied (Kass 2005): My present situation with the Carnegie is unique. . . . I feel it would be difficult to acquire anywhere else the degrees of freedom that this position offers. The new President will continue the policy of no interference and complete freedom. I just go my own pace here with no obligations other than that which my conscience dictates. This seems to fit my personality rather well.

Why have we—that is, our proxies in such bodies as government and funding agencies—abandoned policies based on trust, especially when the overwhelming evidence is that they can be very successful? McClintock was doubly lucky. She was free from the fetters usually imposed by today's myopic funding agencies, even on those whose funding requirements are modest, as they often are for truly original basic research; and she had found a home that backed her unquestioningly. We must do more to recreate the conditions that allow such unquestioned freedom. Precisely what we—you and I—can do is a moot question to which I shall return in Chapter 12.

7 Charles Townes: A Meticulously Careful Scientific Adventurer

In previous chapters I outlined how some of Nature's unexpected and unpredicted properties came to be revealed. Until Max Planck came along, it was universally believed that energy, even on the minutest scale imaginable, streams continuously like water from a pipe. But Planck's deliberations led him to conclude inexorably that it comes only in integral numbers of packages of a specific size. Thus, energy is quantized, a discovery that also marked the end of the "classical" physics era. Before Oswald Avery published his groundbreaking results, scientists assumed that the molecules that control every aspect of our biological lives were complex proteins, and that simple, boring, ubiquitous DNA was merely junk, old baggage left over, perhaps, from previous evolutionary eons. Barbara McClintock originally thought, in common with scientists generally, that our genetic makeup was written, in effect, on invariant tablets of stone (except sportive accidents) for every species for all time. However, I will now describe a discovery that, as its originator often pointed out, should not have come as a surprise and indeed could have been made decades before it was actually produced. Remarkably, its arrival was catalyzed only after scientists had been dragooned into wartime service and stimulated into devising imaginative ways of using their expertise. Their motivation was to increase our chances of surviving a total and brutal war, but it also led eventually to a discovery that might be one of the most valuable goods an ill wind has ever blown.


I mentioned some of Townes’ career in Scientific Freedom. I include him again because he is a prominent member of my Planck Club, and his experience is highly relevant to my present story. In 1951, Townes conceived the idea of the maser while working at Columbia University. The proof of its validity depended on his ability to generate “avalanches” of microwave-frequency photons when ammonia molecules, raised to excited quantum states, are stimulated into dumping their energy simultaneously. But he could not get his molecules to behave as he expected. Even worse, the two most influential people in his department tried to kill his apparently unsuccessful project. As he later relates (Townes 1999, p. 65): [After] we had been at it for two years, Rabi and Kusch, the former and current chairman of the department—both of them Nobel laureates for work with atomic and molecular beams, and both with a lot of weight behind their opinions—came into my office and sat down. They were worried. Their research depended on support from the same source as did mine. “Look,” they said, “you should stop the work you are doing. It isn’t going to work. You know it’s not going to work. We know it’s not going to work. You’re wasting money. Just stop!”
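
To give a sense of scale for the microwave photons involved: the ammonia transition Townes exploited lies near 24 GHz (a standard textbook figure, not one quoted in this chapter), and the snippet below simply converts that frequency into an energy per photon.

# Rough scale of the photons in Townes's ammonia maser. The ~24 GHz figure for
# the ammonia inversion line is a standard value assumed here, not taken from the text.
h = 6.626e-34          # Planck's constant, J s
k_B = 1.381e-23        # Boltzmann's constant, J/K
frequency = 23.87e9    # ammonia inversion transition, Hz (approximate)

photon_energy = h * frequency
equivalent_temperature = photon_energy / k_B

print(f"energy per photon      : {photon_energy:.2e} J")
print(f"equivalent temperature : {equivalent_temperature:.2f} K")
# About 1.6e-23 J per photon, roughly 1 K in temperature terms. Each photon is tiny,
# which is why Townes needed an avalanche of excited molecules dumping their energy
# in step before the emission became detectable.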

Townes had recently been given tenure at Columbia, so he knew he could not be fired for incompetence or ordered around. Nevertheless, he must have been daunted by the awesome weight of Isidor Rabi’s and Polykarp Kusch’s Olympian reputations in this field, in which he was now dabbling with no experience whatever! Furthermore, Rabi was also his patron as he had invited him to come to Columbia only a few years before. Showing extraordinary courage, therefore, Townes, still a junior faculty member, of course, stood his ground and respectfully told his exalted colleagues that he would continue. Two months later, in April 1954, his experiment worked, and the maser was born. Three years after that Arthur Schawlow, Townes’ postdoc at Columbia and his brother-in-law, moved to the Bell Laboratories; and their collaboration led to the optical-frequency version of the maser—the laser. Townes was awarded the Nobel Prize for Physics in 1964 for these discoveries (shared with Alexander Prokhorov and Nikolai Basov [USSR], who developed the maser independently). Schawlow was awarded the Nobel Prize for Physics in 1981 for his work on laser spectroscopy. Born in 1915 in Greenville, South Carolina, Townes excelled at school. When he was 16, he enrolled at Furman University, a small, Baptist, ivy-covered liberal arts college nestled in his hometown. Townes is a devout Baptist who can trace his ancestry back to the Mayflower colonists of Massachusetts. He chose Furman because his father and older brother had been educated there (it was an all-male college in those days) and because it was only just over a mile from his home, so the careful Townes could save money on accommodation. Three years later he earned a degree in physics, but, as his parents thought that 19 was too young an age to start an independent career, he stayed on for another

year to earn a degree in modern languages. Furman faculty did little research themselves and the university was not strong in physics, so Townes had to supplement his training by private study. He learned about electromagnetic theory from an article in Encyclopaedia Britannica written by the master himself—James Clerk Maxwell. His nuclear physics background came from technical journals freely placed in his local public library by AT&T’s Bell Telephone Laboratories—the Bell Labs. He also read about Karl Jansky’s discovery in 1932 of extraterrestrial radio waves in those wonderful journals, an introduction that was to play a large part in his later life.* Bell Labs were famous for their public-spiritedness, but many other large companies such as BP, GE, and IBM also once had similarly expansive and altruistic programs. The industrial scene has since been transformed, of course, and globalization has spawned many new entrants. Philosophies have changed too. How many top companies today commit resources to keeping the public up-to-date on their latest basic exploratory research (assuming that they still have such programs) independently of any prospects of a return? After earning a master’s degree in physics at Duke University, the ambitious young Townes realized that he yearned to learn at the feet of the greats of physics and used his savings to take himself off in 1936 to the illustrious Caltech—the California Institute of Technology. Robert Millikan was there, the first to measure the electrical charge on the electron, for which he was awarded the 1923 Nobel Prize for Physics, as was Linus Pauling, who went on to win two Nobels. J. Robert Oppenheimer and Albert Einstein were frequent visitors, so the young Townes would not have lacked for inspiration. His PhD thesis on the separation of the stable isotopes of carbon, nitrogen, and oxygen, and on nuclear spin measurements of some of their rare isotopes, took him 3 years and was awarded in 1939. The US might have turned the corner on the Great Depression by that fateful year, but physics jobs were scarce. Many of his contemporaries had been obliged to take jobs as schoolteachers or oil-field seismology technicians (a common joke then was that PhD stood for post-hole digger) but Townes was determined to stay with fundamental physics. When an unexpected and generous offer arrived from the Bell Labs (then located in Greenwich Village, New York), he hesitated, as he was not too keen to make money or products for a commercial company, but his professor Ira Bowen persuaded him to reconsider. It was good advice. Townes took the job and soon found that he had joined an exceptionally well-equipped lab and could make his interests as fundamental as he liked. His employer even started him on a series of jobs that introduced him to a range of problems from which he could eventually * Karl Jansky (1905–1950) discovered that radiation apparently emanated from the center of the Milky Way galaxy. Unfortunately, it was not an interest that Bell Labs could sustain in the severe depression of the thirties, and Jansky was obliged to move on to other things. He tried to resume the work after the war but, sadly, died in 1950 from a chronic kidney condition at 44 years of age.

take his pick, and the intellectual environment was as stimulating as any he had experienced. However, these were very dark days, of course, and his sheltered existence came to an abrupt and startling end in February of 1941 when he was summoned to the office of the Lab’s research director, Mervin Kelly, and immediately told (Townes 1999, p. 37): On Monday, I want you to start designing a radar bombing system by adapting the technology used for anti-aircraft guns and working with the Lab’s radar people.

As he goes on to relate with typical honesty, he was not happy with this order mainly because he thought the work would be dull and uninteresting; and in any event, he doubted that helping to kill and destroy would inspire him. However, needs must, and he threw himself enthusiastically into his new job, an attitude that not surprisingly soon created its usual valuable spin-offs. He learned the disciplines required to work in a team and to work to deadlines. He expanded his engineering expertise considerably, particularly in electronics and microwave technology and, hence, his ability to transform ideas into successful projects. He also learned to deal with disappointment, as the military chose not to use any of the systems he helped to develop, all of which had been proven with real aircraft and real targets. Nor, it seems, did they listen to advice. After he and his colleagues had progressively been pushed to develop radars at ever-shorter wavelengths, they arrived at 1.25 centimeters, a wavelength that would give better directivity than earlier versions and would more easily fit into aircraft. However, Townes’ careful studies had revealed that water vapor strongly absorbed radiation of this wavelength; and, as the war was then increasingly moving to the Pacific Theater where conditions are almost always humid, he did his best to warn the military that this problem could prove a big drawback. However, snafu always rules in wartime, of course. His objections were ignored, 1.25 centimeter radars were introduced anyway, and not surprisingly they did not work in humid conditions. Their effective range turned out to be little more than a couple of miles, at which distance low-tech binoculars are at least as good! The system was junked, therefore, but Townes was later able to pick up a cheap and plentiful supply of parts and equipment that proved invaluable to his postwar research. When the war finally ended, Townes, inspired by Jansky’s prewar discovery, thought about taking up radio astronomy. However, he was warned against it; the opinion among senior scientists in the US was that radio waves would not reveal much about astronomy as they were not directional and the wavelengths were too long. As Townes presciently points out (1999, p. 43): People in well-developed fields tend to be conservative particularly with regard to ideas from outsiders. As experts, they feel they know the field well and do not

much care for interlopers. In addition, their views of ideas or technologies behind new proposals outside their own fields of expertise are sometimes rather limited.

Thus, the US did not take part in the early development of this important field, which was left mainly to England and Australia, and their wartime radar researchers. It was a big mistake, of course, as was shortly to be revealed. The USSR launched the world’s first artificial satellite, Sputnik 1, in 1957; and the only instrument that could track it was a radio telescope built in England by Bernard Lovell (1913–2012), another wartime radar pioneer. That instrument, now called the Lovell Telescope, was also used to track and receive data from many other space vehicles, notably the US Pioneer 5, launched in 1960, the first probe to explore interplanetary space. However, the correct conclusion from decisions such as these is not the obvious one (that the policymakers had got it wrong) but that policymakers make bigger mistakes if they assume that accurate predictions of future performance or potential can ever be made in any circumstances. They cannot: real-world complexities are too great to allow their infinite uncertainties to be managed. Nevertheless, foresight based on expert opinion is still considered the best option everywhere today despite its many examples of failure. Even the great Vannevar Bush could get it wrong. In 1961, when Townes became Provost at the Massachusetts Institute of Technology (MIT), he became involved with the Apollo moon-landing project and was invited to chair the Apollo Science and Technology Advisory Committee (STAC). Bush, who was still a VIP (he was chairman of the MIT Board of Trustees), disliked the National Aeronautics and Space Administration (NASA) as he thought the space program was a waste and that Apollo would fail. Townes relates that Bush said:* Look, Charlie, that’s crazy. You shouldn’t be involved with that. You’re wasting your time. It’s going to take a lot longer and a lot more money than they say. And it isn’t going to work.

Where had he heard that before! As chairman of STAC, Townes would want to help the Apollo project as much as possible, of course; but, as it turned out, that also created problems for his future career. In 1966, MIT was searching for a new President, and unfortunately for Townes, Bush was asked to chair the selection committee. Thinking perhaps that Townes would increase MIT’s links with NASA if he were President, Bush did not appoint him even though it had been generally assumed among other colleagues that the post would be his. Shortly afterward Townes left MIT for Berkeley. Bush relented after Apollo’s momentous success, and later congratulated him. But the damage had been done.

* Interview with Charles Townes first published in Laser Community, Fall 2010.

However, to return to Townes’ postwar dilemma, he decided to stick to his wartime knitting and return to microwave spectroscopy, a decision that Bell Labs also happily endorsed. Even during the war, Townes had reflected on the water-vapor problem, the simple fact that molecules strongly absorb radiation at these long wavelengths and how he might use this property as a powerful research tool for examining molecular and nuclear structure. Scientists had already studied molecular vibrations and rotations excited by visible and infrared radiation, but these frequencies were too high to allow accurate measurement. Microwaves, on the other hand, promised to be more sensitive probes; even better, their use was generally unexplored and he was an expert. The ammonia molecule NH3, for example, places its 3 hydrogen atoms at the vertices of a triangle, thereby defining a plane and allowing its lone nitrogen atom to vibrate above and below this plane as if it were on a three-legged trampoline. Careful studies of the microwave spectra emitted by this and other excited molecular species can reveal such new physics as nuclear spins and chemical bond lengths. He soon established a reputation in this new field and showed that some previously accepted measurements using optical spectroscopy were wrong (the nuclear spins of Cl-35, Cl-37, and B-10, for example). However, Bell Labs were not convinced that this work had value for them. His boss told him (Buderi 1996, p. 344): You’ve made a lot of people annoyed because you are talking about what you would like to do. You ought to be talking about what is good for the company.

This was not the wisest remark he could have made to the independent-minded Townes as it merely increased his yearning for a return to university life. Not surprisingly, therefore, when Isidor Rabi,* chairman of the Columbia University physics department, who was impressed no doubt by his growing reputation, invited Townes to come to Columbia, he leapt at this completely unexpected opportunity and started there in 1948. The U.S. Navy also was interested in high-frequency communications and controlled radars and asked him to advise on how higher frequencies (shorter wavelengths) than they had used during wartime might be generated. They agreed to provide financing to Columbia for this

* Rabi (1898–1988) won the 1944 Nobel Prize for Physics for his discovery of nuclear magnetic resonance. He is another member of my Planck Club, of course, and his discovery changed the world. Ed Purcell and Felix Bloch shared the 1952 Nobel Prize in Physics for their discovery of precision methods for studying the composition of different materials using the technique; and, in 2003, Paul Lauterbur and Peter Mansfield won the Physiology and Medicine Nobel Prize for their 30-year-old work that made possible fast, high-resolution, medical imaging. Virtually every hospital or medical center everywhere in the world today has at least one substantial piece of equipment inspired by these discoveries, but now under the politically correct name of magnetic resonance imaging (the ubiquitous MRI machines).

effort with few if any strings and also invited Townes to chair a national selection committee to advise on additional work in this field. The ideas behind the maser’s discovery had already begun to form in Townes’ mind before he went to Columbia. It had been known for some years that if a molecule in an excited state were bombarded with microwave radiation (photons) whose energy (frequency) was exactly the same as that state’s energy, the molecule would be stimulated into emitting a photon exactly in phase with the bombarding photon and in exactly the same direction; that is, it would be coherent with it. Thus, there would now be two photons, and each could proceed to stimulate further coherent emissions from other excited molecules, thereby eventually creating an avalanche of amplified and coherent radiation. These are necessary, even amazing, consequences of the quantum nature of these states and were first pointed out by Einstein in 1917 (Einstein 1917). However, if the molecule were in its ground state (i.e., it was not excited), an incoming photon would probably disappear by being absorbed and raising the molecule to an excited state. The molecule remains in that state for a time determined by its quantum properties, eventually spontaneously decaying to its ground state by emitting a photon of that same energy. However, that later emission would have no relationship with the original photon; that emission could happen at any time and go in any direction. Thus, the excited molecule “forgets” how it got excited, and the emitted radiation would be incoherent with the original. Einstein would seem, therefore, to have come up with the idea for the maser! And so he had, but Einstein was exclusively a theoretician; and, in any case, the little matter of reality would have stood in the way. As everyone also knew, the probability that a molecule would be in an excited state varies inversely and exponentially with the energy of that state. This is the famous Maxwell-Boltzmann energy distribution law, according to which the overwhelming majority of gaseous molecules at ambient temperature is in the ground state and would therefore be efficient absorbers of radiation. Subsequently, that radiation would spontaneously be re-emitted, of course, but it would be incoherent. As Townes put it, “The material soaks up more photons than it surrenders.” Thus, according to then-current understanding, amplified coherent emissions would be fundamentally impossible to achieve in practice. Thermodynamics would not allow it. Objections like these were possibly in Rabi’s and Kusch’s minds when they tried to get Townes to stop his experiment and stop wasting the department’s Navy money, and those objections were possibly also the reasons why no one had previously tried to realize Einstein’s extraordinary theoretical vision. At Columbia, Townes had the usual wide range of academic duties such as teaching, supervision (he had a dozen PhD students) and the inevitable committees, so the maser project did not happen immediately. The idea was never far from his mind, however; and Townes relates how before dawn one morning

in April 1951 while on a trip to Washington to attend a Navy meeting, he had gone out to sit quietly on a park bench in the deserted Franklin Park to muse on the problem and its possible resolution. Suddenly, he had a “Eureka!” moment. Einstein’s paper applied to systems in thermal equilibrium, while thermodynamics, as encapsulated in the Maxwell-Boltzmann law, requires that there are far fewer molecules in excited states than in ground states. But Townes did not have to be bound by that restriction. He could do what he liked. His system need not be in thermal equilibrium. He could arrange to populate a cavity, say, mainly with excited molecules. This would mean, of course, that he would have to get rid of the unexcited ones and thereby force the creation of a molecular gas with an “inverted” energy population far from equilibrium. His thoughts immediately turned to a recent seminar at Columbia given by Wolfgang Paul, a German physicist, in which he had described a new system of focusing molecular beams using four electrically charged rods rather than the usual pair of flat plates. This new device (an electrostatic quadrupole lens) should enable him to focus a beam of excited molecules into a cavity; the unexcited molecules, having different electrical properties, should be deflected away. Jubilantly, he returned to his hotel where he had left his sleeping roommate—his brother-in-law, Arthur Schawlow—to share his new insight. All he had to do now was to make the idea work. But which molecules and which wavelengths should he use? His “friend” ammonia was an obvious choice, especially as the molecule coincidentally had an excited state at 1.25 centimeters! This, of course, was an astonishing piece of luck as it meant that he could bring all his wartime experience and cheaply acquired equipment to bear. Eventually, he gave the project to two graduate students—Jim Gordon and Herb Zeiger—as their thesis projects; and 3 years and many setbacks later their determination was rewarded with success. They now needed a name for the new device. Greek and Latin were tried (Townes is a linguist, of course), but they decided to go for an acronym: microwave amplification by stimulated emission of radiation. The maser was born! But it was still not all plain sailing. Many highly qualified experts still doubted what they had actually done. One famous objection was that masers could not possibly have precisely defined frequencies (energies) as Townes and colleagues claimed because that would violate Heisenberg’s Uncertainty Principle. This ubiquitous principle, first enunciated by Werner Heisenberg (see Chapter 4), shows that the energy of a quantum system—a molecule, say—cannot be known to arbitrary precision at a specific time. Similar rules apply to information on a molecule’s momentum and position. Heisenberg pointed out that Planck’s famous constant “h” sets strict limits to that knowledge and defines an uncertainty arising from a fundamental and inescapable property of the universe—a principle that is unaffected by technological capability. Surprisingly, the great Niels Bohr was also among Townes’ doubters, but there were many others. In general, however, we should expect that radically new

ideas will always have a rough passage at first. We should allow scientists time to adjust their thinking in these circumstances, but how many funding agencies today take account of the difficulties that will inevitably arise from this fundamental problem? In the case of the maser, it would seem that the unconvinced had failed to appreciate the full quantum mechanical significance of the phenomenon of stimulated emission. Each excited molecule, even if that molecule is the only one in existence, is stimulated by an incident photon with the correct energy into radiating a photon with exactly the same energy and phase as the original. The incident photon is not a probe measuring the excited molecule’s energy; it yields no further information but merely precipitates its emission if that molecule is in the excited state. Thus, the Uncertainty Principle does not apply. If there are many such molecules, the “clones” of the original incident photons may go on to stimulate each molecule into emission, of course, subsequently building up an avalanche of coherent photons, thereby creating the maser. It also means that each photon in the maser beam will have exactly the same energy, phase, and direction as the original photon whatever the specific design of the equipment or any other extraneous consideration. In an attempt to measure the purity of the maser beam, the team built a second maser and “beat” its radiation against that from the first. A pure audio signal emerged, thereby proving that the two pieces of equipment were producing beams of virtually unvarying and identical frequencies. The creation of the laser, the optical version of the maser, took some years as a new range of technical problems had to be surmounted—the frequencies involved are thousands of times higher, for example. Townes, together with his colleague and brother-in-law, Art Schawlow, published a joint theoretical paper in 1958 describing how the laser could be made to work. In 1960, Theodore Maiman (1927–2007), working at the Hughes Research Laboratories* in Malibu, California, probably produced the world’s first working laser; but Gordon Gould, a former colleague of Townes, also has a claim. Townes won the 1964 Physics Nobel Prize (jointly with Basov and Prokhorov) “for fundamental work in the field of quantum electronics, which has led to the construction of oscillators and amplifiers based on the maser–laser principle.” Their ideas also eventually spawned an avalanche of technologies, the maser proving so inspirational to so many scientists that in 1960 the journal Physical Review Letters banned further correspondence on the maser’s uses as it was being overwhelmed with them! Amusingly, that journal rejected Maiman’s paper describing his first laser, perhaps because they thought it was yet another maser paper! Today, it would be difficult to find an academic, commercial, industrial, medical, or government location or any domestic environment such as homes,

* It is now called the HRL Laboratories.

boats, and cars in the developed world that does not contain at least one device that uses lasers or the principles behind them. The maser–laser has proved to be one of the most ubiquitous and transformative ideas in history. Such technologies as communications (e.g., optical fibers and semiconductor lasers), manufacturing (e.g., laser welding), navigation (e.g., GPS systems), medical techniques (e.g., laser surgery), bar-code readers (e.g., supermarket checkouts), and consumer products (e.g., CD and DVD players, laser printers) have been revolutionized, as have been space exploration and military operations in general. Indeed, few if any products marketed today have not required lasers at some stage in their production. It is difficult to gauge its financial value to humanity over the past 50 years or so, but I estimate that it surely must be more than 10 trillion current dollars. Townes modestly says that the maser would have happened sooner or later if he had not done it. That is probably true. Indeed, Nikolai Basov and “Sasha” Prokhorov, with whom he became friends, had actually done so independently. However, one cannot accurately predict the future. Einstein correctly described the maser’s principles in 1917, but it clearly took more than 3 decades to implement. Brilliant ideas often look obvious and predictable with hindsight, even to their originators. In any case, the maser–laser discoveries were described in the West for more than 20 years as solutions looking for problems; and even though Townes was internationally well known and respected, many scientists were reluctant to believe the properties he claimed for them. How much longer might the transition to total acceptance and incorporation into a multitude of commercial markets have taken if their sole originators had been relatively unknown and based behind the “iron curtain”? The USSR’s military-industrial complex was well developed, but commercial initiatives there would have needed government endorsement; and its venture capital industry was not prominent.* However, neither Townes nor Basov and Prokhorov would have made their great discoveries if they had not been free to follow their well-tuned intuitions and challenge convention. Freedom is the universal key. Townes’ U.S. Navy funding played a crucial role in the maser’s creation. Townes says that the Navy never tried to direct him or his colleagues or influence them in any way and left them free to work at their own pace at what they thought was interesting and important. The key to the Navy decision would seem to have been that Townes, through demonstrating his exceptional research capabilities over many years, had thereby earned a license to “play” in whatever lab he chose for as long as he wished. However, whatever the Navy’s reasoning, there is no question that its policy with regard to its support * A report from the U.S. General Accounting Office, April 1997, entitled “Cooperative Threat Reduction: Status of Defense Conversion Efforts in the Former Soviet Union,” commented: “When the Soviet Union dissolved, it left behind an enormous defense industrial complex consisting of 2,000 to 4,000 production enterprises—some of which were massive conglomerates—that employed 9–14 million workers.”

for Townes was an excellent model for success, as the subsequent successes, scientific as well as commercial, of virtually every one of the ∼500 members of the Planck Club have also confirmed. Why is this lesson generally being ignored? This is not merely a rhetorical question: it is a plea to those who might be in a position to do something about it.

8 Carl Woese: A Staunch Advocate for Classical Biology

Molecular biologists “can read the notes in the score but they can’t hear the music.”

Choosing a research program can be one of the most difficult problems a scientist faces. This is particularly true at the beginning of a career or when one has reached some sort of watershed. Traditionally, the issues to be resolved at these critical times are exclusively scientific, and one will struggle for as long as necessary to identify lines of inquiry that seem to offer the richest scientific prospects. Choosing a research program is especially difficult for genuinely exploratory research because one’s vision is limited to a few months at best by perennial fogs of uncertainty, but one hopes for the excitement of seeing glimpses of new horizons and indeed to be able to pose new questions rather than to merely confirm the predictions of well-established theories, essential as these processes of consolidation are. As ever, luck and choosing the right problem at the right time also play significant roles, as a perusal of Nobel Laureates’ lectures will confirm. Nowadays, however, scientists contemplating new crusades not only must consider such obvious questions as, “What should I do next?” but also, “Would my plans qualify for funding?” Scientific considerations might not even be the most important things on their minds. First, they must choose a field from a list of national or funding-agency priorities. Those whose interests do not fit neatly within them are not necessarily doomed to failure; however, the more

panoramic they allow their vision to be, the more their problems will escalate. Next, they should prepare against externally imposed deadlines the proposals that might convince a usually anonymous panel of fellow experts that the proposed explorations have been thoroughly planned and costed and have taken all anticipated problems into account including contingencies for the unexpected. As the competition is usually intense, one should also tick any boxes that might increase one’s chances of funding; that is, one’s proposal should contain the persuasive arguments on such subjects as contributing to areas of topical interest, that it will lead to socioeconomic benefits, or that it is otherwise a good value for the money. There is no recorded evidence that, for those so inclined, convincingly sexing-up a proposal diminishes one’s chances of getting funded. Scientists also have an overall duty to their profession; and that is constantly to review and assess the validity of what is known and, if necessary, even to challenge it. This duty is now seriously undermined by the new policies, as I will discuss further in Chapter 12. However, even before the new strictures were imposed, scientists still had to resolve the agonizing questions of precisely what they should propose to do. They could seek guidance; but that can beg the question, as one’s consultants will probably have reputations in specific fields; and their experience may color their advice even though they may strive to be impartial. As I discussed in Chapter 2, Max Planck was initially advised by his supervisor to avoid studying physics as it was no longer interesting; and when after graduation he chose to examine the problems of thermodynamics, a virtually unknown and unfashionable field, his VIP advisors were indifferent to say the least. Luckily, Planck had a mind of his own with the confidence and talent to match. In contrast, the young James Watson had no such difficulties when he was setting out. Following his graduation in zoology from the University of Chicago in 1947 at the age of 19, he joined the Phage Group and Salvador Luria (see Chapter 5) as his graduate student at Indiana University and immediately became inspired by the group’s mission to understand the mystery of the gene. In 1944, Avery showed conclusively that bacterial genes are made of DNA— deoxyribonucleic acid, a relatively simple if large molecule—and were not drawn from the vast and complex domains of the proteins; but most scientists found it such a shocking discovery that it had little impact on genetics for years. Then, in 1952, Alfred Hershey and his young assistant Martha Chase at the Carnegie Institution of Washington Cold Spring Harbor (Barbara McClintock’s lab; see Chapter 5) published their work on phage viruses (Hershey and Chase 1952). As was well known, a virus is not a stand-alone organism, but it needs to form an association with a bacterium, animal, or plant in order to survive and propagate. According to Peter Medawar’s wonderful definition, a virus may be thought of as bad news wrapped in protein, the bad news in this case being the “alien” DNA contained in the virus. Hershey and Chase found that when a virus infects a bacterium it leaves its protein coat outside the cell and

the virus begins to grow only when its DNA passes inside, thus providing completely independent verification of Avery’s revolutionary discovery. But that does not seem to have been their purpose as their publication did not even cite Avery and his colleagues. However, it marked a watershed; and the American biologist and yet another refugee from the Nazis, Gunther Stent, recalls that from that time on, all genetic thought was focused on DNA (Stent 1980, p. xv). Thus, there was now no question that this molecule would play a key role in solving the mystery of the gene; and following a chance meeting with Maurice Wilkins of King’s College London (who was already using X-rays to study DNA) Watson set himself the daunting task of unraveling its structure. Chemically speaking, it had been known for many years that DNA—a nucleic acid—is composed of four nitrogenous bases—adenine (usually represented by the letter A), guanine (G), cytosine (C), and thymine (T)—which, joined to sugar molecules and linked together by phosphate bonds, form the units called nucleotides. But DNA’s architectural structure was still unknown, as were many details of its chemistry; and when in 1951 Watson met Francis Crick and discovered that he was similarly inspired to understand how such a simple molecule as DNA could play a vital role in genetics, they agreed to collaborate on an intensive study at Cambridge University’s Physics department—the Cavendish Laboratory. It led in 1953 to their brilliant revelation that DNA is structured like a spiral staircase with handrails at each side that spiral around each other—the double helix (see Figure 12).* The key to another aspect of their solution, uncovered through many hours of playing with molecular scale models, was that the molecular complex comprising adenine hydrogen-bonded to thymine would be identical in shape to the complex comprising guanine hydrogen-bonded to cytosine. Their spiral staircase would therefore have “steps” made from these specific nucleotide pairs; no other pairs had the same precise dimensions. They did not claim in their landmark 1953 paper that the problem had been solved, however, and confined themselves to making the following cautious remark (Watson 1953): It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.
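The “possible copying mechanism” the quotation hints at follows directly from the pairing rule: each strand completely determines its partner, so either strand can act as a template for rebuilding the other. A minimal sketch (Python; the six-base sequence is invented purely for illustration, and the antiparallel direction of real strands is ignored) makes the point:

    # Watson-Crick pairing rule: A pairs with T, and G pairs with C.
    # Illustrative only; the sequence is invented and strand direction is ignored.
    PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

    def partner_strand(strand: str) -> str:
        """Return the strand dictated, base by base, by the pairing rule."""
        return "".join(PAIRS[base] for base in strand)

    print(partner_strand("ATGCGT"))  # prints TACGCA

Separate the two strands, let each serve as a template, and two copies of the original molecule result; that, in essence, is what the cautious sentence was suggesting.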

This classical understatement (written apparently by Crick to mark their priority, as Watson was not convinced at the time that they had got the right structure for DNA) nevertheless launched a scientific revolution and the birth of molecular biology as a discipline. In 1951, however, DNA’s premier role in genetics still had not been universally accepted. Watson was convinced; but John Kendrew and Max Perutz, two * Its unfolding was far from straightforward, and many other scientists were involved. See, for example, Stent (1980).

Figure 12 A two-dimensional representation of the three-dimensional double helix, showing the sugar-phosphate backbones and the base pairs. The two complementary base pairs are shown: adenine always pairs with thymine and guanine with cytosine. Each “step” is the same length, and the turn rate is always 36 degrees. (Source: U.S. National Library of Medicine.)

senior staff members in the Cavendish, among others, were openly skeptical. They were pioneers of X-ray crystallographic studies and in 1962 won the Chemistry Nobel Prize for a protracted and frustrating struggle, extended over many years, that eventually led them to unravel the three-dimensional structures of proteins such as hemoglobin and myoglobin. They were clearly not averse to tackling intractable problems, therefore. Ironically, Crick and Watson were to win the Physiology and Medicine Nobel Prize in the same year! But Watson was a brash, young, and inexperienced American postdoc in his early twenties. Crick was trained as a physicist, but he had had his career held up by war work on magnetic and acoustic mines, was 35, and did not even have a PhD. Neither was he an expert in X-ray crystallography. The head of the department was Sir Lawrence Bragg—the youngest person ever to win a Nobel Prize (awarded in 1915, when he was only 25), and another Australian—who had won it for work on X-ray crystallography; so he really knew a thing or two about the problems these two enthusiastic and determined amateurs would have to

solve. It would also seem that Bragg and Crick crossed swords frequently, as Crick was something of a loose cannon and was never afraid to speak his mind no matter how distinguished the company. From today’s perspectives, colored and distorted by cold calculations, priorities, and anonymous peer review, it is therefore almost incredible that despite all this Bragg would back them and ensure that they were free to follow their intuition. Furthermore, few heads of department today would be in a position to make such an offer, as they are usually just as much prisoners of the new policies as anyone. However, complex relationships have often spiced the evolution of science, and Bragg may possibly have seen them as winners who could help in his long-running and intense feud with Linus Pauling,* the great American scientist with whom he had frequently competed and lost, most notably on the structures of large silicate molecules and the α-helix structures of proteins. Bragg knew that Pauling had set his sights on DNA and was very anxious not to lose again. His gifted renegades might therefore have seemed a good bet! Crick and Watson’s success launched a torrent of studies on precisely how DNA was able to orchestrate and control the multitude of molecular mechanisms that allow living cells to metabolize, mechanisms that seem largely invariant across the entire spectrum of life. These include, for example, such details as how base pairs code for genes whose expression stimulates the production of amino acid sequences necessary for protein synthesis and how its structure allows cells to replicate themselves with meticulous accuracy over many generations so that organisms grow and develop. These studies confirmed that life’s almost boundless complexity could indeed originate in a molecule with an apparently breathtakingly simple structure, but the unfolding of the story was convoluted with many false starts, twists, and turns. The general principles and many details have now been confirmed; but even today, understanding of cellular metabolism is far from complete. No wonder many scientists did not immediately accept Avery’s revolutionary discovery! However, in contrast to today’s virtually universal policies, no central authority stepped in to prioritize, coordinate, or direct the studies; and the story emerged from the unconstrained creativity of many individual scientists each tackling an aspect of the problem that interested them because of its intrinsic merit and personal appeal. The emerging science has also been well described by many authors, but I would particularly recommend the chapter on DNA in Nick Lane’s superb book (Lane 2009, p. 34). Carl Woese (1928–2012) received his PhD in 1953 just as Crick and Watson were publishing their iconic results on DNA structure. As I will explain, Woese’s unorthodox research apprenticeship would seem to have stimulated the realization that, while the identification of the molecular basis of the

* Pauling won the Nobel Prize for Chemistry in 1954, as well as the Nobel Peace Prize in 1962.

genetic code was important, the study of biology may not always be amenable to such “reductionist” approaches, that is, those approaches that strive to understand the problems at the most fundamental molecular levels. Against the background of such a profound observation, he not only had to choose a specific problem but also to begin the huge task of working out a framework within which he could operate and plan his future research. Woese’s training was certainly unusual. He received a first degree in physics and mathematics at Amherst College in 1950 and a doctorate in biophysics from Yale University in 1953. For Woese, Crick and Watson’s discoveries exemplified the differences between the molecular perspective (reductionism) and what he saw as that of the classical biologist. According to Woese, molecular biologists ignored evolution and the nature of biological form, either failing to recognize them or dismissing them as inconsequential, as historical accidents, fundamentally inexplicable and irrelevant to our understanding of biology. As he later argued, however, his views perhaps colored by the heightened perspectives of hindsight (Woese 2004): Now, this should be cause for pause. Any educated layman knows that evolution is what distinguishes the living world from the inanimate. If one’s representation of reality takes evolution to be irrelevant to understanding biology, then it is one’s representation, not evolution, whose relevance should be questioned!

Thus, Woese believes that a biology revealed only by reductionism would be incomplete. As he colorfully put it: Knowing the parts of isolated entities is not enough. A musical metaphor expresses it best: molecular biology could read notes in the score, but it couldn’t hear the music.

His central problem was that, although a fragment of DNA (a gene) codes for the amino acids necessary to make a protein, it does not follow that we understand the code. Why, for example, were specific codons (the name given to each of the 64 possible triplets of bases, such as AAG*) assigned to specific amino acids (a question which is usually referred to as the translation problem)? He thought that we would only understand the genetic code when we understood the processes and forces that had created it and how it had evolved to its present state. But what precisely should he do about it? In 1964, Woese joined the faculty of the University of Illinois and started his ambitious work on the evolution of translation. At that time, there was no truly universal phylogenetic (evolutionary) framework. Animal and plant phylogenies were reasonably complete, but the vast bacterial world was effectively * There are four bases—adenine, thymine, guanine, and cytosine—so the maximum number of triplet combinations is 4³, or 64.

virgin territory in these respects. C. B. van Niel of Stanford University’s Hopkins Marine Station had tried for years in the 1930s to classify bacteria by their shape and metabolism, but he and his former student, Roger Stanier of the University of California, Berkeley, had concluded that: The ultimate scientific goal of biological classification cannot be achieved in the case of bacteria.

They had also described this deplorable state of affairs as “an abiding scandal” and attributed it to the absence of a clear concept of what actually constitutes a bacterium. In Woese’s view, the lack of a classification consigned microbial research to the Dark Ages. He said: It was as if you went to a zoo and had no way of telling the lions from the elephants from the orangutans—or any of these from the trees.

At the beginning of the twentieth century, the scientific consensus was that there were two major domains of living systems: the prokaryotes (bacteria, which have small, very simple cells) and the eukaryotes (animals, plants, and fungi), which have large, extremely complex cells with a central nucleus confining (and protecting!) their DNA. The tree of life—the evolutionary story of life on Earth—was therefore thought to have only two main branches, and according to convention there was no reason to think that other domains were possible. But there could be no question that bacteria had played a vital role in the evolution of life, and the absence of data on bacterial evolution therefore indicated a vast domain of ignorance in an important field, a closed book that for Woese was begging to be opened if only he could find a key. Physicists have often been attracted by difficult problems in fields where their specific training has little relevance or value, as Francis Crick had been and Woese was about to be. Some might call it youthful arrogance, but whatever it is, how else can the big intractable problems ever be solved other than by “outsiders” from whatever fields? This class of dissidents can also include those who believe that they know better than their teachers because those whose professional training has steeped them in the prevailing dogma have clearly failed to make progress. In today’s world, these questions pose difficult problems for nonconformists as experience and training in established ways of thinking play important roles in deciding who gets funded. What chances, therefore, might such irreverent mavericks as Crick and Watson, and Woese have today? Luckily, Woese’s crusade starting in the 1960s coincided with the National Aeronautics and Space Administration (NASA) becoming interested in the origins of life. NASA turned out to be an enlightened sponsor. It supported him through thick and thin; and there was a great deal of the latter, with modest grants of $50,000 per annum for many years. It was indeed lucky that Woese had not been trained in microbiology, as he probably would have

been put off by the difficulties as others had been*; but he clearly faced an uphill struggle merely to find an intellectual infrastructure within which he could operate. His first requirement was a viable scheme for classifying the bacteria, the obvious choices of shape and metabolism having failed, as van Niel had shown. For this, he turned to ribosomal RNAs (rRNAs) as he knew from his own postgraduate work before he went to Illinois that these molecules were found in most organisms and that searching for variations in them could provide good indicators of a bacterium’s evolutionary history. RNA—ribonucleic acid—is another nucleic acid, of course, with a very similar structure to DNA except that it is a single helix and comes in a range of forms (see Figure 24, Chapter 11). Messenger RNA (mRNA) is the molecule that copies fragments of DNA (a gene, say) and, in a eukaryote, can be exported from the nucleus. Ribosomes are the sites in the cytoplasm where proteins are synthesized, and are common to all forms of life. Thus, in protein synthesis, for example, a molecule of mRNA carries the genetic information on a protein’s amino acid sequence from the DNA to a ribosome, and that ribosome’s RNA component (rRNA) decodes the mRNA and catalyzes the linking of the specified amino acids into the growing protein. However, searching for RNA variations was easier said than done at that time (the mid-1960s) and his great idea could easily have died on the vine if he had not had an extraordinary stroke of luck. Fred Sanger, the British biochemist working at the Cambridge Laboratory of Molecular Biology (see Poster 5), had developed a technique for sequencing RNA (one that was in 1980 to lead to his second Chemistry Nobel Prize; he is the only person ever to be so honored) that seemed to fit the bill. Called “oligonucleotide cataloging,” it was a biochemical cut-and-paste technique that enabled him to reconstruct very short sequences of bacterial RNA and store them on film. It was, at that time, an extremely slow and tedious technique. Woese, working alone for many years, used it to build up a huge library of sequence data that had the appearance (to the uninitiated) of blurred spots on thousands of rolls of film, a library that festooned every surface of his office and lab. He spent his time—lots of lonely time—searching for patterns using his light table and occasionally making modest progress, particularly on such cellular organelles as chloroplasts and mitochondria that were thought to have originated long ago as stand-alone bacteria. Nevertheless, despite this slow progress (some might say lack of progress!) his essential NASA funding continued uninterrupted. In 1976, Woese had another enormous stroke of luck in that Ralph Wolfe, an expert in methanogens (organisms that produce methane), had an office down the corridor from Woese at Illinois and had wondered whether Woese’s * Woese might have thought similarly to another flamboyant iconoclast, Lynn Margulis (Boston University), who later (2006) remarked that “microbiology is mostly not science, it’s practical art. That comes from its beginnings with Pasteur. Microbiologists are pragmatic, pious businessmen. There’s no intellectual tradition in microbiology.”

Poster 5: The UK Medical Research Council (MRC) Laboratory of Molecular Biology

Max Perutz, the Austrian-born British molecular biologist, came to Britain before the war as a refugee from Nazi Germany. When Perutz showed some of his early results to Sir Lawrence Bragg, Bragg became so enthusiastic that he invited him to come to the Cavendish and arranged funding for him from the Rockefeller Foundation—that wonderful organization whose actions fully endorse the fact that the pursuit of science should recognize no boundaries, national or disciplinary. After the war, Perutz reports that Bragg persuaded the Secretary of the UK Medical Research Council (MRC), Sir Edward Mellanby, at a lunch in the prestigious London Athenaeum Club (the highly agreeable and time-honored way for the British “great and the good” to seal their deals) to fund a new unit at the Cavendish led by Perutz (Perutz 1970). Bragg told Mellanby that Perutz and Kendrew were on a treasure hunt with only the remotest chances of success. If they were successful, they would create an insight into the workings of life on the molecular scale; but any benefits to medicine, should they come, would take many years. Mellanby was convinced! How times have changed! Nowadays, such a decision could take years and would probably involve complex caveats on the unit’s purpose, its location, and how it should be managed. In contrast, the unit was formed almost immediately, staffed solely by Perutz and Kendrew at first; but it soon grew, as did its success. In 1953, Crick and Watson solved the problem of DNA structure, shortly followed by Kendrew and collaborators who solved the structure of myoglobin, and Perutz and his collaborators that of hemoglobin. In 1962, the unit moved out of the Cavendish to a nearby site in Cambridge and became the MRC Laboratory of Molecular Biology (LMB). The LMB became one of the world’s most spectacularly successful labs. MRC ensured that the lab’s administration was as simple as possible. Performance was subject to triennial reviews by an external committee, carried out with the lightest of touches, requiring only brief explanations of what had been done and indications of future plans. The committee’s recommendations were largely advisory, thereby giving division leaders a free hand in running their affairs and sensibly acknowledging, therefore, that they would know best what should be done. Perutz was its first director, but he set the tone for the lab and its administration in interpreting his role as a facilitator of the performance of science, not as its director. The LMB had a single budget that everyone shared (for example, consumables could be drawn from a well-stocked store with no more than a signature) and MRC also provided state-of-the-art equipment. The lab had no overt hierarchy. Sydney Brenner succeeded Perutz as Director in 1979 and followed Perutz’s superb example. I visited the lab shortly afterward when I was

setting out on my Venture Research crusade (see Chapter 12). Brenner’s office was indeed indistinguishable from anyone else’s, and he had no secretary. He told me that the lab was founded on the “Rockefeller principle” of backing good people to do whatever they wished and was very much “hands on.” Astonishing as it may seem to today’s world, it employed only one or two administrators because “everyone does his own work.” Brenner was studying the genes of the nematode Caenorhabditis elegans, the tiny worm with only about 1000 cells, which he used as a model organism to help to understand the genetics of more complex mammalian organisms. That work led subsequently to three Nobel Prizes (Brenner, Sulston, and Horvitz, 2002; Fire and Mello, 2006; and Chalfie, a former research student of Brenner’s who had moved to Columbia University, in 2008). It also led, through such considerations as apoptosis or programmed cell death, to an explosion in the number of fertile new lines of inquiry in medicine and agriculture. Indeed, the lab has so far spawned a total of nine separate Nobel Prizes shared by 13 in-house scientists, and a further eight for work initiated or done at the lab by visitors. Such an impressive record would seem to speak for itself; it should not need further justification. Yet Brenner said in his Nobel Lecture (2002): Such longterm research could not be done today, when everybody is intent only on assured short term results and nobody is willing to gamble. Innovation comes only from the assault on the unknown.

However, this cri de cœur from a scientist whom his international peers had just agreed should be given science’s supreme honor has apparently been ignored. When Brenner started out, C. elegans was widely regarded as a joke organism, and any assessment he might have produced of the potential economic or social impact arising from his work, therefore, would have been more likely to amuse than impress third parties. In addition, the LMB today is subject to the MRC’s new and revised Royal Charter, which obliges it to do the research that meets the needs of users and beneficiaries. As I write in 2013, the LMB is about to be moved to a new and fashionable £212 million building that includes a huge atrium—the new lab has been dubbed the Crystal Palace by some perceptive observers—and which will form part of the Cambridge Biomedical Campus. The emphasis will be on the development of new technologies, the study of basic biological processes, and the use of this knowledge to tackle specific problems in human health and disease, including the development of new therapeutic agents.

Glitzy is not necessarily bad, but why have these substantial policy changes been made? What arguments were made to justify them? The lab’s

former ethos emphasizing insight wherever it might lead could hardly have been more scientifically successful, and moreover, millions of people worldwide have also benefitted in ways nobody could have predicted. No doubt some commercial companies will now enjoy a considerable number of short-term gains, which in itself is a good thing, but the lab was never famous for focusing on quick fixes, and the losses to basic exploratory science that Brenner so eloquently lamented will probably be incalculable.

bizarre technique might shed some light on these organisms. Wolfe knew of about eight such organisms but little was known about how they were related to one another or to other microbes. They had diverse morphologies, but all seemed to have similar biochemistry in that they grew anaerobically by oxidizing hydrogen and reducing carbon dioxide to methane. Woese knew from van Niel’s work that morphology was not important, but their biochemistry intrigued him. He could not have made a wiser choice! Most scientists are obliged to proceed nowadays by using, in effect, narrow laser-like beams in the hope that, if pointed in the right direction, they might illuminate tiny fragments of the unknown. Woese’s decision, however, had effectively turned on a massive broad-beam searchlight to reveal a completely unexplored wilderness. His results astounded him as the methanogen’s rRNA had genetic sequences that were untypical of any bacteria or indeed any eukaryotes he had seen. Indeed, they were so different that they seemed to indicate another domain of life, which initially he called the archaebacteria. As his work continued, he realized, however, that they were not bacteria at all, and called them “archaea.” His revolutionary discovery, published in 1977 in collaboration with Wolfe, was greeted with skepticism and doubt (Fox 1977). As Virginia Morell (1997), writing in Science, said: Woese’s solitary years at his light table had left him with a reputation as an odd person, “a crank, who was using a crazy technique to answer an impossible question.” ... Molecular biologist Alan Weiner of Yale University recalls that many leading biologists thought Woese was “crazy,” and that his RNA tools couldn’t possibly answer the question he was asking. ... Few said anything to Woese directly, or even responded in journals. “The backlash was rarely if ever put into print,” says Woese, “which saddens me because it would be helpful to have that record.” Instead, many researchers directed comments to Wolfe, who was well established and highly regarded. Recalls Wolfe: “One Nobel Prize winner, Salvador Luria, called me and said, “Ralph, you’re going to ruin your career. You’ve got to disassociate yourself from this nonsense!” Ernst Mayr of Harvard University scoffed to reporters that the notion of a third domain of life was nonsense, an opinion that he and a handful of other skeptics hold to this day. “I do give him credit for recognizing the archaebacteria as a very distinct group,” says Mayr,

who insists on keeping the word bacteria attached to the Archaea. “However, the difference between the two kinds of bacteria is not nearly as great as that between the prokaryotes and eukaryotes.”

The tide turned a few years later, and the archaea are now accepted as established fact (see Figure 13). They are now found extensively—in extreme environments such as hot ocean vents, hot springs, and highly alkaline or acidic waters, as well as in the guts of ruminants and humans. They are also found in the oceans; indeed the archaea in plankton may be among the most abundant organisms on Earth. However, Woese's story remains highly controversial. Unfortunately, his work has a possible limitation in that, in effect, it is based on a search for variations in a single gene—that for ribosomal RNA. But if the studies are extended to other genes or genomes from bacteria and eukaryotes, as is possible using today's techniques, another picture of the evolution of life on Earth emerges (see Figure 14). Ernst Mayr, the highly distinguished American biologist who died in 2005, never accepted Woese's proposed third Domain, or Empire as he called it. He argued in a paper published by the US National Academy of Sciences that the vast majority of the characteristics of the two types of bacteria were so similar and so fundamentally different from the


Figure 13 A phylogenetic tree of living things, based on RNA data and proposed by Carl Woese, showing the separation of bacteria, archaea, and eukaryotes. (Source: Wikipedia.)

Figure 14 A schematic diagram of the ring of life proposed by Rivera and Lake. The Last Universal Common Ancestor of all life emerges at the bottom of the ring, which splits into archaea to the right and bacteria to the left. These two branches later combine to form eukaryotes. (Source: Rivera and Lake, 2004.)

eukaryotes that they must be ranked as a single taxon, the prokaryotes (Mayr 1998). In a publication in the same journal, Woese argued that their disagreement was not about classification, but concerned the nature of biology itself (Woese 1998). He said: Evolution for Dr. Mayr is an "affair of phenotypes." For me, evolution is primarily the evolutionary process, not its outcomes. The science of biology is very different from these two perspectives, and its future even more so.

One of today’s most pressing problems is the question of how such a complex cell as a eukaryote could evolve from what had gone before. One of the most promising theories is that in the remote past, one organism engulfed another; that is, a bacterium, say, engulfed an archaea. However, Woese argues that such a process would involve radical changes in the designs of the cells involved: You can’t just tear cell designs apart and willy-nilly construct a new type of design from the parts. The cells we know are not just loosely coupled arrangements of quasi-independent modules. They are highly, intricately, and precisely integrated networks of entities and interactions. Any dismantling of a cell design would not reverse the evolution that brought it into existence; that is not possible. To think that a new cell design can be created more or less haphazardly from chunks of other modern cell designs is just another fallacy born of a mechanistic, reductionist view of the organism.

He may be right, but I doubt if he would oppose reductionism absolutely. The issues are still unresolved, therefore, but their importance is beyond doubt. The vast diversity of the eukaryotes—animals, plants, and fungi—has an easily recognizable structural basis, whereas we know little about the full range of microbial diversity. We do know, however, that microbes have mastered the art of living and thriving in virtually every possible ecological niche on Earth, no matter how hostile it may appear to us (see Poster 6). Microbial life has been around for billions of years, during which time it has not only transformed the planet—its atmosphere, for example, would have almost no oxygen without it—but has developed an enormous range of novel biochemistries that we know little about but whose largely untapped reservoir of capabilities is highly likely to have many practical and valuable applications and insights that we cannot currently predict. Catalysts, for example, mediate many industrial processes, but they are often fragile and prone to breakdown. Microbes, on the other hand, have over the eons developed their own

Poster 6: Ecological Niches An ecological niche is much more than a hole in the wall. Every organism from any of Nature’s domains requires supplies of energy and nutrients in accessible forms and sufficient quantities to ensure it can function and survive. Availability of oxygen and tolerance to it are also often important; organisms are called aerobic if their metabolism depends on oxygen and anaerobic if it does not. In addition, the natural variations of such environmental properties as salinity, acidity/alkalinity, temperature, and pressure should also be acceptable to the organism. All these factors go into defining an organism’s ecological niche; and in principle, access to those essential supplies within a favored niche should be free from competition. Indeed, the greater the competition, the lower the prospects for survival. Habitable niches can vary enormously. Thus, extremophiles such as the archaea Sulfolobus acidocaldarius thrive at 80°C and can survive in boiling water (Madigan and Marrs 1997); and Pyrolobus fumarii, which lives in the walls of black smokers deep in the oceans, is happy at 113°C (indeed it stops growing at temperatures below 90°C)! At the other extreme, the optimal growth temperature for the bacterium Polaromonas vacuolata is 4°C, the temperature at which water has its maximum density; and it cannot reproduce above 12°C. Another archaea, Halobacterium salinarum, thrives in highly alkaline saline environments. However, the only species living today that enjoys a completely competitor-free ecological niche is Homo sapiens, although it might have had competition in the past. That niche is exclusively intellectual, of course.

catalysts—enzymes—that reliably and routinely operate in the most hostile environments on Earth. However, although the number of known microbial species globally has increased some fourfold in the last couple of decades as automated gene sequencing technology has become increasingly available, the Earth is still a largely unknown planet as far as microbes generally are concerned. It is estimated conservatively that about 99% of the full microbial spectrum remains to be discovered, and most of those we know about have yet to be closely examined. Most bacteria, and all known archaea, are not pathogenic to humans—quite the opposite. Furthermore, the concept of what it is to be human has recently taken a new turn.* It has been discovered that each of us, members of Homo sapiens comprising some 10 trillion cells, is controlled by some 23,000 genes, but also is home to some 100 trillion microbes controlled by some 3 million genes. Almost all of these cohabiting microbes have now been identified, but the full range of their relationships both with each other and with our human cells is mainly unknown. It would also appear that we each have our very own special relationship with our personal microbial fauna, relationships that can be critical to health and well-being. This subject is so important that the US National Institutes of Health recently set up the Human Microbiome Project with the goal of characterizing the microbial communities that inhabit and interact with the human body in sickness and in health. Medicine is likely to be eventually transformed. However, the archaea, a vital component of the planet’s evolution and its microbial population, might still be unknown if Woese had not begun his lonely crusade all those years ago. He did so at a time when scientists were free to challenge convention without having to seek the endorsement of the very people who have helped formulate those conventions. If he had had to work under today’s rules, it is a moot point whether his controversial proposals would have qualified for funding; but the odds would seem heavily stacked against it. The huge task of understanding the truly vast range of complex interrelationships between the Earth’s multitude of microbial species and those with its equally complex environment is daunting. Present-day scientists with Woese’s disregard for the orthodox, for peers’ opinions, and for the need to produce short-term results will not lack for significant scientific problems to tackle, but who will fund them? Woese has argued against reductionism; but I hope he would agree that, in designing their research, scientists should always aspire to achieve completeness even though they may know that in science that goal, like perfection, can never actually be achieved. Thus, they should aim to consider every conceivable perspective on their problems in searching for solutions. An important aspect omitted or deliberately ignored can only be expected to create difficul-

* See HMPC (2012).

ties. Nature has demonstrated again and again that she is very subtle and rarely if ever yields to direct confrontation. Scientists must therefore strive to cover all their problems’ possible bases and be prepared to rethink their strategies if experiments reveal unanticipated results thereby suggesting that their original thinking might be flawed. The best and indeed the only way to assemble this unpredictable mix of necessary expertise is by backing individual insight, creativity, and flair. One effective way of doing that at modest cost would be to allow qualified academic researchers to tackle problems from any direction using any techniques or approaches that seem relevant to them at any particular time. After all, it is not as if they are asking for something for nothing. They would be dedicating their careers to such ventures, and that is the most priceless resource they have. It is sad indeed that funding agencies today never acknowledge this simple fact. In genuinely exploratory research, attempts to achieve completeness by direction from the center, which is invariably based on consensus opinion, will almost certainly fail if only for the simple reason that no one can know beforehand what lines of attack or ways of thinking will be most effective. Backing the widest accessible range of creative individuals maximizes the probability of successful outcomes as it leads to advances on the broadest possible fronts, but of course one cannot know in advance precisely what those outcomes might be. Governments and funding agencies usually see these things differently. Such comments as “We can’t afford to do everything, so therefore we must prioritize” tend to be the norm. But resources have always been strictly limited and always will be. Per-capita spending on research today is vastly higher than it was before approximately 1970, but researchers then were largely free to use their very-available, but usually very-modest, funding as they pleased. The resulting unpredicted scientific and economic harvests were spectacular. Hopefully, we can find ways of restoring some of that freedom in the future.

9 Peter Mitchell: A High-Minded Creative and Courageous Bioenergetics Accountant

Energy generation and exchange are central to every process in the universe. Most observations are based on light from the visible part of the radiation spectrum that interacts with our eyes or instruments, but that spectrum is virtually infinite and much of it is still unexplored. Each of us is bombarded by many billions of neutrinos every second. These are the most weakly interacting particles known, so the overwhelming majority continues harmlessly on its way. The minutest fraction, however, does not; and we have little idea of the effects they might cause. Strongly interacting cosmic rays also arrive in large numbers, but the Earth's atmosphere and magnetism can be efficient shields. We can sometimes see the spectacular evidence of their effectiveness in the rippling aurorae. Some of these rays are more than ten million times as energetic as the highest we can generate on Earth, but we do not know how they are produced or from where they come. It is my current understanding that the universe should also be full of reverberating gravitational waves arising from the possible infinity of cataclysmic upheavals that have punctuated its long and turbulent history. Detection of these ghostlike waves would present perhaps the ultimate test of human ingenuity. Even with equipment capable of detecting strain sensitivities* of 10⁻²¹, so far we have not been able to identify a single wave positively.

* These are the fractional changes to the dimensions of the detecting equipment that an incoming gravitational wave might produce. It is roughly equivalent to measuring the distance to the nearest star (∼4 light-years) to an accuracy of about 50 microns.
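The footnote's comparison is easy to check with simple arithmetic. The short sketch below assumes only the 10⁻²¹ strain figure and the roughly 4-light-year baseline quoted above; the light-year-to-metre conversion is a standard constant.

    # Back-of-the-envelope check of the strain-sensitivity footnote above.
    light_year_m = 9.4607e15            # metres in one light-year
    baseline_m = 4 * light_year_m       # rough distance to the nearest star
    strain = 1e-21                      # detector strain sensitivity quoted above

    delta_L = strain * baseline_m       # implied change in length, metres
    print(f"length change ~ {delta_L * 1e6:.0f} micrometres")
    # Prints roughly 40 micrometres, i.e., a few tens of microns,
    # the same order as the footnote's "about 50 microns".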

But all these remarks apply only to directly observable phenomena. Increasingly sensitive astronomical measurements made since about 1990 have revealed that we are able to see only ∼5% of the energy/matter in the universe—dark energy comprising ∼70% while dark matter accounts for ∼25%. But these proportions are also subject to considerable uncertainty, because the existence of these "dark" substances can only be inferred, of course. Little is known about this huge gap in knowledge, and even the widely held view that much of the dark matter must be "nonbaryonic" (i.e., not made up of protons and neutrons) is an unconfirmed hypothesis. But our planet and every living thing on it are immersed in and indeed are composed of this largely unknown fabric, of course. Our ignorance is at least 95% complete, which, for the ambitious, ingenious, and courageous scientist, is truly a target-rich environment. In these circumstances, carefully formulated research proposals can hardly fail to uncover new knowledge eventually. Why is it necessary, therefore, to prioritize academic research? All except ∼0.03% of the Earth's observable energy income comes from nuclear reactions within the Sun, driving a multitude of processes in the Earth's atmosphere and oceans that are also closely and variably intertwined. Formed just over 4 billion years ago, the Earth's complex infrastructure was, during the following few hundred million years, colonized extensively by single-cell microbial life. The events that led to this miraculous and rapid transformation, at least on the geological timescale, are currently the subject of intense debate; and there is no general agreement on how our once-sterile planet came to make the dramatic conversion to fertility. Nevertheless, the by-products of its teeming microbial population going about its routine metabolic business slowly, inexorably, and radically changed the environment that gave birth to it. In the beginning, the Earth's atmosphere was virtually devoid of oxygen, but some 2 billion years later it had made good progress toward being as oxygen-rich as it is today. This change was momentous. It is surely no exaggeration to describe oxygen as the molecule that made the world, as its presence seems to be an essential precursor for complex life (Lane 2002). Oxygen cannot create complex life, of course, but as I outlined in the last chapter, the microbial interactions that might have led to such complex, multicellular life as plants, fungi, and animals are also being intensively studied, but there is little general agreement on how it came about. (Indeed, I am currently involved in such a study at University College London.) By the way, all this uncertainty pales into insignificance when it comes to the origins of intelligent life—in which intelligence is the determining factor—and of which our species Homo sapiens is the only surviving example, as we know almost nothing about how it arose. If we imagine for a moment that the planet Earth might be capable of sentience, it would not be surprising to such a hypothetical entity that its latest tenants might be capable of causing disruption, and that humanity's activities since the Industrial Revolution, say, are having a global impact. It should obviously be tiny compared with the long-sustained efforts of the multitudinous

microbes, so a judgmental Earth might find it difficult to distinguish the signal we might be generating from the usually chaotic noise from its own ultracomplex systems and, of course, the perpetual rumblings from the mighty cosmos, too, and attribute any specific changes categorically to us. Some readers might object to my fanciful implication that the Earth might behave physiologically; but in this, as in so many other things, I am a mere follower. James Lovelock (see Poster 7) is perhaps the most ardent proponent of this hypothesis and for many years has proposed that Earth might be considered as a single organism— to which he has given the name “Gaia”—an entity that is alive to the extent that its temperature and chemistry are regulated in ways similar to those of other living organisms. Such an Earth may or may not consider that we are adding to its troubles, but many might think that we are at a disadvantage in fathoming its opinion, as we do not have the commanding, all-seeing aspect that an entire planet enjoys. But scientists should always aspire to such a per-

Poster 7: James Lovelock James Lovelock was born in England in 1919, following which the development of every aspect of his career has been unorthodox. To illustrate, he was still a student when World War II began and, therefore, entitled to automatic exemption from military service. However, he was ideologically opposed to the war and took the dangerous step of registering as a conscientious objector anyway, perhaps to express solidarity with those who felt similarly but were not lucky enough to be exempted. But he was exempt and continued with his research at the Medical Research Council (MRC) on mitigating the effects of burn wounds and injuries. However, when he heard about the Nazi atrocities, he asked to be conscripted. He was turned down on the grounds that his research was too valuable! After the war, he worked at Yale and Harvard Universities and was a consultant to the National Aeronautics and Space Administration (NASA) on the search for life on Mars. Returning to England, rather than become an academic, he decided on a career as a fully independent researcher, often collaborating with another independent mind, the American scientist Lynn Margulis. But he took this remarkable course even though he was by no means financially independent. His funding requirements, for basic research performed for decades in a converted barn at the edge of Dartmoor in southwest England, were always modest and met ad hoc from whatever sympathetic source he could find. All this is summarized in Lovelock (2010). His ideas have attracted a great deal of public interest but many scientists are skeptical. Nevertheless, his scientific credentials are impeccable. He has published numerous papers in the peer-reviewed literature and was elected a Fellow of the Royal Society in 1974.

spective, of course, whether they agree with Lovelock’s metaphors or not. Indeed, they should strive to take the loftiest, most-universal perspective imaginable if their work is to have other than parochial significance. My attempt to take such a lofty view of the global warming problem is given in Appendix 2. Deoxyribonucleic acid (DNA) and ribonucleic acid (RNA) are essential to the specification and replication of all living organisms, the emergence of which elaborate story is one of the triumphs of twentieth-century biology, a story that, of course, is far from complete. But these molecules, their complex interactions, and indeed the thousands of components of a cell’s metabolism can neither form nor function without a supply of energy, a supply that within a cell was thought to come solely from the making and breaking of chemical bonds. Adenosine triphosphate (ATP) is the molecule at the heart of these processes, and in effect is the energy quantum of all forms of life.* It was discovered in 1929 by the German biochemist Karl Lohmann, another transformative discovery that sadly was not recognized by the award of a Nobel Prize. During the following few decades, searches for the mechanism by which organisms synthesize ATP proved fruitless, and the problem was considered to be one of the great unsolved problems in science. Not surprisingly, biochemical researchers were virtually unanimous in their assumption that like many other cellular processes the mechanism would also be biochemical. As they say, when all you have is a hammer, everything looks like a nail. Accordingly, therefore, the prevailing view was that scientists should concentrate on searching for the sequence of reactions involving energy-rich chemical intermediates whose end point would be the Holy Grail of ATP. Peter Mitchell eventually proved that this apparently reasonable strategy was also totally wrong. I got to know him quite well during the late 1980s, and it was astonishing to discover that such a mild-mannered, polite, and urbane person was once at the very heart of perhaps the bitterest extended and occasionally vitriolic scientific controversies of the twentieth century. It is a fascinating story. Born in 1920 to affluent parents, Mitchell attended Queen’s College Taunton in Somerset and Jesus College at the University of Cambridge. However, he was a notoriously poor examinee; and he did so badly in his Cambridge entrance examination that he was admitted in 1939 only after his old Headmaster, a well-known mathematician, wrote to the Master of Jesus pleading for Mitchell’s acceptance. After graduation, in which again he did not excel, he started on a PhD that initially was directed by the wartime chemical defense program. His work included searches for antidotes to poisonous gases and seeking to understand penicillin’s therapeutic modes of action, thereby stimu-

* Energy production processes within bacteria and archaea are slightly different. See Chapter 5 of Lane (2005) for an excellent review.

lating his lifelong interest in cellular membranes and the mechanisms by which they ensure that the outside world is kept outside except for deliveries essential to the cell’s survival. His first PhD thesis was rejected, apparently for lack of supporting data, and he had to resubmit. It was hardly an auspicious way to begin a career. His Royal Society biographer and one-time arch rival, originator of the biochemical solution to the ATP problem, was the Australian-born biochemist E. C. (Bill) Slater, who was born in 1917 and who spent most of his career at the University of Amsterdam (Slater 1992). He said, with his typical magnanimity: Why was it that one who, in the eyes of his contemporaries, was clearly especially gifted and imaginative and a hard worker, failed to impress his examiners? Was he too clever for his examiners, as a very distinguished colleague was heard to remark when he heard about the Ph.D. thesis? In any case, Peter Mitchell (like Einstein) is a classical example of the subsequent career belying the results of his formal academic record. It should be added, however, that Mitchell’s apparent inability to get his thinking across to others, especially those in established positions, continued into his later career. He was by no means indifferent to this; indeed, it may have been a contributing factor to periods of deep depression.

Nevertheless, Mitchell did impress some VIP scientists at Cambridge, notably Michael Swann (later Lord Swann) and Fred Sanger, the subsequent double Nobel Prize winner in chemistry, who noted that: Peter had an original idea on every subject and we all knew even then that he would possibly change science.

Coming from Sanger, that was praise indeed. Swann was appointed professor of zoology at the University of Edinburgh and, impressed by Mitchell’s work on bacterial membranes, invited him in 1955 to set up a chemical biology unit in his department. Mitchell accepted and invited his Cambridge collaborator, Jennifer Moyle, to join him. Moyle was, in the words of Mitchell’s biographers, John Prebble and Bruce Weber (2003, p. 46): A superb, precise, and orderly experimentalist with a keen analytical mind, whereas Mitchell had a creative vision and an instinct for asking fundamental questions.

Their collaboration lasted until her retirement in 1983 (see Figure 15). Mitchell took what he called "an outsider's interest" in the problem of ATP synthesis. Technically, there are two very similar problems: that of oxidative phosphorylation, "ox phos" in the jargon (the processes by which mitochondria produce most of the energy in animal cells), and that of photosynthetic phosphorylation (a similar process by which plant chloroplasts harvest the energy from the Sun). It is surprising that such vastly different forms of life as animals and plants, and indeed life in all its forms, should have very similar

Figure 15 Photograph of Peter Mitchell and Jennifer Moyle taken at their Cambridge laboratory in 1946. (Reproduced by permission of Peter Rich in his capacity as the Chair of the former Glynn Research Foundation.)

mechanisms for trading energy. It further indicates that Nature tends to converge toward similar solutions whatever the environment.* Efficiency and flexibility seem to be the ultimate tests. However, Mitchell approached the problem physiologically, that is, taking a top-down, coherent approach that was similar in many ways to that taken independently by Carl Woese in his microbiological studies described in Chapter 8. The key to Mitchell's ideas came from his wartime work on penicillin and bacterial membranes. It was well known that ATP is generated in mitochondria and chloroplasts (see Figure 16), cellular organelles that were once bacterial in origin; therefore, ATP must be transported across their membranes to wherever it is needed in the cell's interior (the cytosol). But that means it must necessarily be a directed process occurring at specific locations, whereas, as then envisaged, cellular biochemical processes were presumed to be scalar and could happen anywhere in the featureless soup of enzymes and other molecules distributed throughout the cytosol. Accordingly, Mitchell proposed the radically new concept of "vectorial chemistry," a concept based at that stage entirely on theoretical considerations of the problem rather than data. It is fascinating that, although Mitchell and Slater had both been inspired by the work of David Keilin (1887–1963) on the respiratory chain† when they were

* See Morris (2003) for an extended review of these and many other related issues.
† In Keilin's picture, the respiratory chain is a series of processes involving hemoproteins (cytochromes) and electron transfers that enable oxygen to be consumed to generate energy.

Figure 16 A representation of the adenosine triphosphate molecule, ATP. The three groups to the left are the phosphate groups; the nucleotide adenosine is to the right. The enzyme ATP synthase hydrolyses the bond with the third phosphate to create adenosine diphosphate (ADP) and a large amount of free energy, a reaction that is at the heart of all living processes. (Source: Wikipedia.)

at Cambridge, their respective researches subsequently took such dramatically different directions. Thus, Mitchell ignored the minutiae of the voluminous biochemical data, and concentrated instead on the thermodynamics of the problem in ways that surely Max Planck would have approved, whereas Slater took a strictly biochemical approach that advocated a search for intermediates. Mitchell’s starting point stemmed from his consideration of the simple but profound question: how can scalar chemical forces drive vectorial membrane transport? They cannot, of course. He concluded, therefore, that there must be another way. Against this background Mitchell published in 1961 what he called the chemiosmotic hypothesis, although strictly speaking his use of the term “osmosis” departs somewhat from the conventional (Mitchell 1961). Nevertheless, the name has stuck. He denounced the idea that high-energy chemical intermediates played any role in ATP synthesis, and replaced it with a coherent hypothesis that related electron transfer in the respiratory chain with the transmembrane transport of protons in the opposite direction (creating a pH gradient and electric field across the mitochondrial membrane), thereby establishing a closed circuit, so to speak, that conserved the total energy. The interplay between these two electrically dominated processes drives the ATP synthesis cycle utilizing the huge enzyme ATP synthase (we now know that its molecular weight is roughly 1000 times that of ATP), which must be in constant use to ensure that the cell’s energy demands are met and that the cell stays alive. It was about as left-field a suggestion as possible, especially as there was at that time little or no data to support it. Worse still, it contained almost no classical biochemistry: a physicist could even have proposed it! As the charismatic Austrian-American biochemist Efraim Racker (1913–1991) put it (Racker 1975):

Given the general attitude of the establishment, these formulations sounded like the pronouncements of a court jester, or of a prophet of doom. Experimentally, these alternatives sounded even less attractive than the elusive chemical intermediates. By 1963, I concluded at a symposium in Atlantic City that anyone who was not thoroughly confused simply did not understand the situation.

Racker himself worked hard to provide the experimental evidence needed to support Mitchell’s hypothesis, and indeed, many thought that he should have shared in Mitchell’s Nobel Prize when it was awarded in 1978. Leslie Orgel, a British theoretical chemist who moved to the Salk Institute in San Diego in 1964, later (Orgel 1999) described it all succinctly and elegantly by posing the rhetorical question, “Are you serious, Dr. Mitchell?” and commenting: Not since Darwin and Wallace has biology come up with an idea as counterintuitive as those of, say, Einstein, Heisenberg and Schrödinger.
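Mitchell's central quantity, the proton-motive force produced by the pH gradient and electric field across the membrane, can be illustrated with a short calculation. The membrane potential and pH difference used in the sketch below are typical textbook values assumed purely for illustration; they are not figures taken from Mitchell's papers or from this account.

    # Illustrative sketch of the proton-motive force at the heart of the
    # chemiosmotic hypothesis; the numerical inputs are assumed typical values.
    R = 8.314        # gas constant, J / (mol K)
    F = 96485.0      # Faraday constant, C / mol
    T = 310.0        # absolute temperature, K (about 37 degrees Celsius)

    delta_psi = 0.150   # electrical potential across the inner membrane, volts (~150 mV)
    delta_pH = 0.5      # magnitude of the pH difference (matrix more alkaline)

    # With the usual sign conventions the electrical and pH terms add:
    pH_term_per_unit = 2.303 * R * T / F            # ~0.06 V per pH unit at 37 C
    proton_motive_force = delta_psi + pH_term_per_unit * delta_pH

    print(f"pH term: ~{pH_term_per_unit * 1000:.0f} mV per pH unit")
    print(f"proton-motive force: ~{proton_motive_force * 1000:.0f} mV")
    # Roughly 180 mV, the order of magnitude usually quoted for respiring mitochondria.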

Mitchell had another problem to contend with. Slater's classical biochemistry proposals stimulated large numbers of scientists, particularly in North America (recall the situation at the close of the nineteenth century when German scientists thought electrons were uncharged [see Chapter 4]), into dedicating themselves and large amounts of research funds to searching for the proposed high-energy intermediates. Not surprisingly, as so often happens with technology-driven initiatives, the search developed a momentum of its own. It therefore became increasingly difficult to stop, and for its protagonists to admit that the biochemical pathway did not exist. The very existence of these searches also meant that few people accepted Mitchell's theory, especially as it threatened many careers, and the pressures on him, which his somewhat irascible manner tended to amplify, took their toll on his health. The so-called "ox phos wars" had begun. His doctors told him in 1962 that, if he did not change his lifestyle, they would need to operate and remove three-quarters of his stomach. He wisely chose to keep his stomach and resigned his academic post. However, his departure from the research scene proved only temporary. Luckily, Mitchell was a member of the family that controlled Britain's biggest building contractor—Wimpey—and although his share was not large, he was rather wealthy. He found a derelict mansion and farm in remote Cornwall—Glynn House—but very conveniently located only a mile from a main-line railway station with fast trains to London. He then spent the following 2 years renovating it at his own expense, supervising and doing some of the work himself. By the end of that time, he had converted it into a habitable home and, with Jennifer Moyle's help, into a small but well-equipped research laboratory—the Glynn Research Institute—"a quiet haven for untrammeled scientific work and thought," funded by an endowment of some £240,000 (some £1.8 million in 2013 money or ∼$2.6 million) provided by himself and his brother Christopher. During these conversions, Mitchell became involved in farming the

land, and as his biographer Bill Slater records, “milking eight cows by hand, morning and evening, for several months, which he claimed was excellent therapy for his gastric ulcers.” It was not a cure for his condition, of course, but it was enough to allow him to continue his research. At Glynn, he and Moyle gathered data to support the chemiosmotic theory and, in 1966, Mitchell privately published a comprehensively expanded version of his 1961 paper that he hoped would convince his many critics. But the ox phos wars had taken another turn. In 1964, the American biochemist Paul Boyer published his so-called “conformational theory” as a contribution to the problem. His central thesis was that the energy necessary for ATP synthesis came from conformation or shape changes of ATP synthase, the enzyme that actually performs the synthesis. As it was a biochemical solution, many researchers rallied to its banner especially after Slater in 1974, mindful of the continued failure to find the high-energy intermediates, finally declared his biochemical theory dead. See Figure 17 for a photograph of Mitchell taken in 1974. But controversy and the hectic acrimonious correspondence between the main proponents persisted. Its resolution began when Efraim Racker perceptively suggested that each of the combatants should write a review of their positions for publication in a joint paper. As Prebble (2002) put it:

Figure 17 Peter Mitchell photographed at his Ninth CIBA (now called Novartis) Medal Lecture awarded by the British Biochemical Society in 1974. (Reproduced with permission from Peter Mitchell, 1976, Biochem. Soc. Trans., 4, 14 399, © the Biochemical Society.)

Although the European bioenergeticists claimed not to have the same funding problems as the Americans, they still recognized a need to resolve the chaos that had now engulfed the field. Inevitably, few shared Racker’s view that a formal statement on oxidative phosphorylation could be agreed.

It was published by Boyer et al. in 1977; and as Prebble notes, the joint paper begins with an Introduction presenting an agreed statement of the problems; but, before Racker's initiative, such a statement had never been attempted. Racker was merely reminding them of what was at stake, and that to "jaw-jaw is always better than to war-war," as Winston Churchill once said, or it is always better to talk. Nevertheless, the wars continued for some years, but Racker's inspired intervention marked the beginning of the end. Racker now joined the Mitchell camp, and more importantly, data supporting Mitchell's counterintuitive solution slowly began to carry the day. But amazingly, Racker tried to assuage the feelings of the war's losers by attempting to produce an agreed and "signed" statement that included many of the conflicting positions (Prebble and Weber 2003, p. 199). He seemed therefore to be implying that science was democratic and that, if a sufficient number of senior scientists signed the statement, all would be well. But science is not democratic, of course, and the very independent-minded Mitchell refused to sign. Then out of the blue, Mitchell was astonished to hear in 1978 that he had won the Nobel Prize in Chemistry. As the Royal Swedish Academy of Sciences put it, the award "was for your contributions to the understanding of biological energy transfer through the formulation of the chemiosmotic theory." The Nobel Committee seemed to be signaling that some specific aspects of his theory were still awaiting experimental confirmation, and indeed some doubting scientists thought the award was premature and voiced their concern that the Nobel Committee would have egg on its collective face if the chemiosmotic theory were demonstrated to be untrue. But Nature is the ultimate arbiter, and experimental data eventually confirmed the theory, although some details had to be revised. I should add, as an aside, that Racker's intervention rang bells for me, although on a much smaller stage. In the late 1960s, I was an experimental high-energy physicist working at the Daresbury Laboratory in Cheshire when I started to collaborate with a distinguished group from the Universities of Pisa and Rome to measure the proton's axial vector form factor, a measurement that had not been made before because of its difficulty. Unfortunately, my new role became a casualty of laboratory politics, and I had to give way to a more experienced researcher. Two years later, however, the collaboration was on the verge of collapse, and no progress had been made toward preparing the required formal proposal. The lab director, Alex Merrison, another staunch advocate of scientific freedom, asked me if I would resume my previous duties, which, of course, I was delighted to do. It seems that the reason for the

difficulties was that the Italian theoreticians could not agree with their British counterparts on what we were actually trying to measure. My first task, therefore, was to break the stalemate, so I asked when the groups had last met, only to discover that they had corresponded only in writing! Within a week or so, we arranged a meeting with all concerned at the University of Trieste; and the problem was resolved 10 minutes after it began. One of the Italian theorists had opened the proceedings with a preamble, only to be challenged immediately by the Brits. Then a beautiful example of a collective “Eureka!” moment followed as we realized that the theorists were saying the same things but using different scientific notations! The way was therefore opened to what became almost a decade of fruitful collaboration. However, to return to more general considerations, it should be astonishing that virtually every funding agency nowadays evaluates almost all proposals using only written correspondence. Face-to-face meetings are rare, thereby creating boundless opportunities for misunderstanding. The ox-phos wars did not end immediately, but this was partly due to Mitchell’s failure to acknowledge that some details of his chemiosmotic mechanism might be wrong. In addition, his deep suspicion of possible active indirect roles for parts of large proteins, perhaps deriving from his exclusive concentration on small molecules when he started out, also seem to have blinded him to the significance of the effects that protein shape changes can induce. Boyer’s conformational proposals, however, while being unable to explain ATP synthesis directly, later proved invaluable in explaining the wondrous mechanisms by which ATP synthase operates; and they won him a share in the Nobel Prize in Chemistry in 1997. My use of the word “wondrous” can easily be defended. Every one of us, even when sleeping, generates on average, kilogram for kilogram, some 10,000 times more energy than the Sun (Lane 2005, p. 67). The Sun’s nuclear reactions only manage some 0.2 milliwatts per kilogram of solar matter while our metabolism, fuelled by the workaholic ATP synthase enzyme, produces some 2 watts per kilogram of Homo sapiens! Rotating at some 150 revolutions per second, these amazing enzymes embedded in cellular membranes produce for each one of us roughly our body weight of the ATP molecule each day. ATP synthase is one of the most remarkable molecular machines ever made, and Nature achieved that billions of years ago. Mitchell’s discovery was one of the most important of the twentieth century, stimulating new ways of thinking in experimental and theoretical biochemistry, and later also in nanotechnology. I knew him well for a few years. He was a very complex person and a passionate advocate for individual freedom. I can attest that it would probably have been impossible to persuade him in 1955 to consider trying to justify his work to funding agencies in terms of its possible potential economic or social impact. The very idea of patenting for profit was anathema to him, and he made clear to any who asked that he worked solely for the benefit of humanity. Mitchell retired from the post of director at Glynn in 1985 and invited Peter Rich from the University of Cambridge to succeed

Figure 18 Photograph of the research staff at the Glynn Research Institute in 1996. Peter Mitchell is at the center with Peter Rich on his left. (Reproduced by permission of Peter Rich in his capacity as the Chair of the former Glynn Research Foundation.)

him. Mitchell then devoted himself to fundraising, apparently with very limited success. Rich was then a participant in my Venture Research scheme (see Chapter 12) with funding until 1987, which we were able to transfer to Glynn. Following lengthy discussions with Mitchell, we were on the threshold of supporting him at Glynn through Venture Research, but to my intense regret I was unable to persuade him to accept the very weak contractual conditions BP's funding of the scheme would impose. More than 200 participating academics had happily accepted those conditions; and as we proved over and again, they allowed total scientific freedom. Glynn was closed in 1996—see the photograph of the Glynn group taken a little earlier (Figure 18). Peter Rich moved to University College London to set up the Glynn Laboratory of Bioenergetics; he continues to work there today. If today's funding rules had been operating in 1955, the almost universal hostility to Mitchell's "outsider's interest" might have stopped any further development of his ideas. On the flip side of the same coin, in 1978, Mitchell had become widely regarded as the expert on cellular energy generation; and knowledge of his well-known disapproval of Boyer's conformational ideas might have persuaded funding agencies that these ideas would not prove good value for money. These two examples alone could hardly provide better

supporting evidence for Feynman’s perceptive definition recalled in Chapter 1, “Science is the belief in the ignorance of experts.” One might expect that Mitchell’s success with his ideas on vectorial chemistry would have persuaded chemists in general of the importance of that approach—it has not. One of the most successful Venture Research programs (see Chapter 12) of the 1980s was a collaboration between physicists at the University of Texas at Austin (Harry Swinney, et al.) and chemists from the University of Bordeaux (Patrick DeKepper, et al.) to study ways of achieving multidimensional chemistry in the laboratory (Braben 2008, p. 167). Classically, when studying the interaction of two chemicals, say, one puts them in a test tube and stirs them thoroughly so that they are always in intimate contact with each other. Indeed, virtually all chemistry, academic or industrial, is done in what is effectively a zero-dimensional scalar environment. Bearing in mind that a scientist’s main objective is to study what Nature does and thereby to strive to understand her, one can hardly fail to note that Nature never seems to engage in well-stirred scalar chemistry. No living system is well stirred, nor are the oceans, the atmosphere, or indeed the universe. There is structure everywhere we look. Chemistry, as practiced by Nature, is always performed at a highly specific time and location. The Swinney and DeKepper collaboration succeeded in developing the first laboratory chemical reactors to yield sustained spatial patterns—an essential precursor for the study of multidimensional chemistry—but since BP ended Venture Research funding in 1993, new sources of funding have not been forthcoming, and the group’s interests have moved on. The problem remains a challenge.

10 Harry Kroto: An Artistic and Adventurous Chemist with a Flair for Astrophysics

Chemistry is the oldest of the sciences. For much of recorded history it was known by its Arabic name of "alchemy," of course; and it was not until about the eighteenth century that "chemistry" became the norm. Chemists would have had little difficulty demonstrating a potential for economic and other social benefits because such objectives were the sine qua non of what they did. They were very successful in this, and made many advances in such fields as ore extraction, metal refining, brewing, dyes, cosmetics, and medicines, motivated only by prospects of increased fame and riches, and the possibility that they might ease the lives of their fellows. Indeed, at least until the seventeenth century, practical work tended to dominate at the expense of the advancement of science in general, but chemists rarely had to deal with the restrictions on their creativity imposed by the prevailing dogma. Scientists from other disciplines were not so lucky, of course, as illustrated by the well-known struggles of Copernicus, Kepler, and Galileo. Many of Isaac Newton's struggles, on the other hand, seem to have been self-imposed. In addition to his prolific contributions to physics—in mechanics, optics, and mathematics—Newton also had a long career as a chemist. Astonishingly, he devoted more than 30 years to searching for a Holy alchemical Grail that the most desperate modern scientist would scarcely contemplate for a moment, the outcome of which seems to be devoid of any scientific value. John Maynard Keynes, the exceptionally creative twentieth-century British economist, was a student of Newton's works in general and his voluminous

handwritten alchemical manuscripts in particular (Newton wrote at least a million words on the subject, none of which was published). He had also assembled a vast library of virtually every book on alchemy since Aristotle. Keynes said that Newton was (Christianson 1984, p. 205): The last of the magicians, the last of the Babylonians and Sumerians, the last great mind which looked out on the visible and intellectual world with the same eyes as those who began to build our intellectual inheritance rather less than 10,000 years ago.

One might wonder why such a consummately accomplished scientist should waste his time so wantonly. Indeed, it is still a mystery. Newton was an exceptionally complicated person. He was a devoutly religious Unitarian (how ironic that he joined Trinity College at Cambridge!), writing even more words on religion than he did on alchemy. The long-prevailing scientific dogma of the time was that every substance, animate or inanimate, was composed of just four elements—earth, air, fire, and water. Indeed, the received wisdom was that if lead and gold, say, both consisted of these four elements, "why may not the dull and common metal have the proportions of its elements adjusted to those of the shining precious one?" Thus, base metals might be thought of as "unripe gold," and the chemist's task was merely to find ways of ripening them. Therefore, Newton began an almost endless stream of lengthy experiments with "fire and crucible," which involved the treatment of a huge range of metals and ores at carefully controlled temperatures and times and to which he brought the full rigors of his meticulously careful scientific method. The thinking behind all this also extended to another possible substance—the philosopher's stone—the ultimate in catalysts that could synthesize rare substances such as gold and silver or mythical substances such as the elixir of life itself. Newton devoted the major proportion of his career to the uncovering of these "Treasures of Darkness," as his biographer Gale Christianson describes them (Christianson 1984, p. 203), but apparently without even the slightest hint of success. There was certainly no Principia Chemica to match his Principia Mathematica, and there seems to have been no rational, rigorous reasoning behind his amazing aberration. As Chapter 9 illustrated, the tendency for widely accepted theories to become virtually indisputable and paralyzing dogma has not diminished. Dogmatism, once identified as such, should be easy for scientists and other searchers of the truth to challenge, largely because of its unwavering certainty. But dogmatism in its most invidious form may be unstated, as in Chapter 9, for example, when it was almost universally assumed that adenosine triphosphate synthesis must take place via chemical intermediates; there were no other possibilities. In these circumstances, its influence is subtle and all-pervasive; and it requires courage, determination, and insight at their highest levels to challenge it. Scientists fostering suspicions about the quality of their emperor's

clothing are faced with the dawning and agonizing prospect that voicing those suspicions will probably place them in conflict with the most influential people in their fields and therefore with the conventional wisdom of the dominant group, or COWDUNG, as it might be described. This mischievous, apposite, almost-but-not-quite acronym was coined by the British biologist and philosopher Conrad Hal (Wad) Waddington (1905–1975) (Waddington 1977, p. 16). Sadly, it was published posthumously; it should be better known as it elegantly encourages courageous skeptics to give no more credence to dogma's devoted disciples than they deserve. In the 1960s and 1970s, many of Peter Mitchell's difficulties outlined in Chapter 9 stemmed from a refusal by chemists in general to accept that solutions to chemical problems might come from such nonchemical considerations as variations in electric-charge gradients across cell membranes. Similar examples have been given in earlier chapters, and of course there are many others. Every participant in our Venture Research program (see Chapter 12) pursued novel approaches that were almost invariably in defiance of 1980s dogma in whatever field; and, not surprisingly, the appropriate national funding agencies had concluded that they were likely to fail or be irrelevant to current problems—hence their approach to us; we were always the funding agency of last resort. Nevertheless, following the intensive, open-ended, and face-to-face discussions that characterized our pseudo-self-selection procedures (see Chapter 12), we had found participants' arguments irresistible; and we subsequently ensured that they should have the resources (and freedom) they required. Not surprisingly, their results eventually showed (at least implicitly) that the dogma had been wrong or seriously flawed, as they led to hundreds of significant publications in the peer-reviewed literature. However, these few pioneers succeeded in launching their challenges and bringing them to fruition. Many others were not so lucky. In the twenty-first century, the ubiquitous new policies, with their emphasis on consensus, economic benefit, and relevance, endow COWDUNG with more power than ever. It ought to be curtailed. Carbon has been the most studied element in chemistry. If we could take ourselves back to the time when the new funding rules increasingly began to be applied (to about 1970, say), carbon's atomic properties were well known and could satisfactorily explain the structures of vast ranges of molecular species in which carbon plays key roles. There would still have been the expectation, of course, that in view of carbon's seemingly endless versatility,* many more of its compounds would be found in the future beneath the multitude of unturned pebbles on science's beach (see Poster 8). That situation is unlikely to change; but next-step, routine discoveries are also unlikely to reveal new scientific insights, although such outcomes can never be ruled out if scientists do stumble across them. However, they must be free to follow whatever line

* Approximately 90% of carbon compounds are organic. They include carbohydrates, proteins, enzymes, lipids, nucleic acids, polymers, plastics, medicines, drugs, and dyes.

Poster 8: Carbon Leaving aside for the moment what should be the chastening fact that despite the profusion of scientific advances made over the past century or more, some 95% of the matter in the universe has still not been identified, carbon is perhaps the most extraordinary element we know. Its uniqueness stems from its atomic structure. The carbon nucleus of six protons and six neutrons (there is also a stable isotope with seven neutrons, but that does not affect the simplified argument here) is surrounded, according to the usual quantum mechanical picture, by a "cloud" containing six electrons. The quotation marks used here indicate word meanings that are not precisely classical, but are near enough for our purposes. Two of those electrons are "paired," each having an identical and opposite angular momentum to the other, in a tightly bound state in close proximity (on average) to the nucleus. These electrons play only modest roles in carbon's chemical interactions. The other four in the "electron cloud" may occupy higher angular-momentum states but are more weakly bound, and each of them, or combinations of them, are available to "pair" with electrons from other nearby elements or other carbon atoms, thereby forming "covalent" chemical "bonds" with them. Thus, a "single" carbon bond is formed when one of its electrons is paired with an electron from another element. "Double" bonds are formed when two electrons from each element are shared, and so on up to a maximum of four electrons for carbon, making it quadrivalent. This arrangement of four relatively free electrons bound not too strongly to a relatively small nucleus means that carbon can form long linear chains, branched chains, rings, and combinations of these structures more readily than any other element, creating a virtually infinite number of potential compounds. DNA is, of course, a classical example. These chains or loops can often form spontaneously (rather than requiring the action of catalysts) because these structures have a net lower energy than the isolated free atoms from which they were formed.

appeals, of course. In Nature, carbon had been found in three so-called allotropic forms: graphite, diamond, and amorphous carbon such as charcoal and soot. Their physical properties varied widely, but they were all generally well understood. Details excepting, therefore, the “Book of Carbon” as it might have been called could have been presented in the post 1970 world as being well on the way to completion, and funding agencies operating under today’s rules would have been unlikely to believe that searches for substantial revisions to it should be given priority. Harry Kroto was born in Britain a few months after the outbreak of World War II. His parents were refugees who had fled Germany and the Nazis with

only the possessions they could carry. Both were deemed enemy aliens, but only his father was interned (in the Isle of Man). His mother (and later his father) moved to Bolton in the North of England, where Harry grew up. Not surprisingly, therefore, he soon learned that he would have to be resourceful to survive. He was educated at Bolton School, where his interests slowly converged toward chemistry. As he states in his autobiography published on the Nobel Prize Web site (The Nobel Foundation 2012): I, like almost all chemists I know, was also attracted by the smells and bangs that endowed chemistry with that slight but charismatic element of danger which is now banned from the classroom. I agree with those of us who feel that the wimpish chemistry training that schools are now forced to adopt is one possible reason that chemistry is no longer attracting as many talented and adventurous youngsters as it once did. If the decline in hands-on science education is not redressed, I doubt that we shall survive the twenty-first century.

It is a pity that such insightful remarks continue to be ignored. Nowadays any strange smell or dramatic bang that teachers might wish to call upon by way of introducing young people to science’s adventurous spirit must first survive a formal risk assessment analysis. Very few do so (see Poster 9). The young Michael Faraday discovered benzene, for example, in 1825; but for the past few decades, teaching laboratories have been virtually forbidden to allow students to experience personally something of the joy of Faraday’s discovery of this important molecule, because benzene is toxic and carcinogenic. All this

Poster 9: Big Bangs

I remember a demonstration at the University of Liverpool’s physics department using lycopodium powder (the finely divided spores of the clubmoss) when I was still a schoolboy in the 1950s. The demonstrator gave an offhand, low-key warning that we should expect a bang, but then went on to produce the mother of all bangs—a bang so loud that it almost lifted me out of my seat with astonishment. The demonstrator smiled broadly and proudly at us, as if to say, “Now THAT was a bang!” We youngsters buzzed with excitement for many minutes afterwards, some of us for the rest of our lives. Ubiquitous and uncompromising safety regulations would not allow such inspirational demonstrations today. Thus, for example, workers at flour mills, coal-fired power stations, or any other places where finely divided dust can gather accidentally might not fully understand the reasons to be careful and, consequently, might be unsuspecting of the devastating force that full-scale explosions can have. The new rules do not forbid presentations using such powders as lycopodium, but those that are allowed only produce gentle “pops” accompanied by small bursts of pretty flame—all very forgettable.

Figure 19 Two diagrams showing the iconic molecular structure of the benzene molecule C6H6: six carbon atoms arranged in a ring held together by single and double bonds. The two representations are used interchangeably.

may be very sensible but what risks does society take by immersing youngsters in a culture that requires them never to do anything without first carefully balancing the risks and ensuring that they are acceptable according to some imposed formula? What effects might these dogmatic policies have on spontaneity or joie de vivre of entire generations? How might they affect such traits as resourcefulness, determination, and youthful bloody-mindedness on which progress has always depended? Fortunately, however, Kroto was not deprived of his inspirational smells and bangs, and went on to study chemistry at the University of Sheffield and to earn a PhD. Benzene’s molecular structure was not discovered until 1865 (see Figure 19). As the story goes, the German chemist August Kekulé had perhaps the most famous dream in science in which he imagined a circle of six snakes each swallowing the tail of the snake in front, and went on to propose the seminal ring structure now universally accepted and of key importance for organic chemistry. But it seems that the dream was probably a fabrication as Kekulé had seen a proposal for benzene’s structure published some 4 years earlier by the Austrian chemist Josef Loschmidt (1821–1895) who was then a schoolteacher.* Kekulé, a professor at the University of Ghent (he was shortly to move to Bonn), had presented his structure in a paper to the French Academy in Paris in 1865 to great acclaim. Years later, his former student and successor as professor of chemistry at Bonn, Richard Anschütz, who later also became his biographer, read the Loschmidt paper and asked himself the obvious question about whether Kekulé had been aware of it before he made his claim. At first, Anschütz was adamant that he had not, but after some fascinating detective work he had to conclude that Kekulé had seen it. It seems that the turning point came when Anschütz heard a colleague’s comment that “dreams do not usually come with footnotes and literature citations.” * See Noe and Bader (1993) for a full description of this unedifying episode and their determination to honor the “father of molecular modeling.”

However, Kekulé’s claim has not only stood mainly unchallenged but also resulted in his being covered in honors. The German Chemical Society celebrated the 25th anniversary of his “discovery” in 1890 at “the Benzolfest”; but much more was to come. In 1895, the German Kaiser Wilhelm II ennobled him, allowing him to add the title “von Stradonitz” to his name. Loschmidt, a shy and self-effacing man, never defended his priority. He obtained a junior post at the University of Vienna in 1868 and went on to flourish there, eventually becoming Dean of the faculty of science. Loschmidt had the good fortune to become friends with two of the nineteenth century’s most eminent physicists—Josef Stefan and Ludwig Boltzmann—whom he also taught. Another of his remarkable discoveries was to accurately estimate in 1865 the actual size of a typical molecule. Boltzmann admired the depth and profundity of his insight in general, and indeed Loschmidt had helped him with his own studies on entropy. It is no wonder that Boltzmann commented in his obituary: Loschmidt’s work forms a mighty cornerstone which will be visible as long as science exists.

Coming from Boltzmann, that is praise indeed, and a remarkable tribute to a remarkable man who, sadly, history has almost forgotten. To put Loschmidt’s discovery and Boltzmann’s comment into context, recall that in 1865 few scientists accepted the very idea that atoms and molecules were real entities rather than merely being fictitious constructs. Even at the end of the century, the great Max Planck was still not convinced. For this discovery alone, therefore, Boltzmann’s praise is fully justified. Kekulé was not the first scientist to behave in this shabby way, and he will probably not be the last. Following an urge to travel, Kroto accepted an invitation to work with Don Ramsey at the National Research Council in Ottawa. The NRC under E. W. R. Steacie (1901–1962) had created an environment in which scientists were free to work on any problem that interested them, and it had generally cultivated an encouraging atmosphere in which young people could thrive. Kroto chose microwave spectroscopy, studying such embryo chain molecules as NCN3, and honing his quantum mechanical skills in interpreting the results. These choices were entirely Kroto’s, of course, selected according to what he thought were his personal skills or lack of them and how they should be refined and extended to prepare him for the way ahead. Nowadays, such decisions are likely to be made by a “School Director” or some such panjandrum, that is, a person seeking the optimal development of a department’s human and physical resources in terms of such measures (metrics is the fashionable word) as winning grants or moving up league tables. However, 2 years later Kroto took another postdoc job at the Bell Labs in New York and studied laser Raman spectroscopy, a technique pioneered by the Indian physicist, C. V. Raman

(1888–1970). Target molecules are bombarded with light, and the scattered radiation is detected and measured to yield data on the vibrational and rotational modes of the excited targets. Raman won the Nobel Prize for Physics in 1930 for this discovery. Thus, suitably primed in the ways of research, Kroto was now ready to embark on an independent scientific career and the joys of exploring a vast unknown universe. In 1967, he moved to the University of Sussex. Founded in 1961, it was then a new university with an intentionally pioneering structure. Instead of the traditional disciplinary departments, there were “Schools of Study,” the design of which was intended to promote high-quality teaching and research. Kroto was offered and accepted a permanent post as Lecturer in the School of Chemistry and Molecular Sciences in 1967. At that time, appointed academic staff automatically received modest contributions to their everyday research running costs from the government via a departmental allocation. In addition, students had quota awards, and they could choose with whom they wished to work. Thus, Kroto was free to choose his future path without having to submit proposals for the approval of anonymous committees, a fate that few of his students today can escape. Spectroscopists had turned to the heavens long before the radio astronomers and had discovered significant quantities of such molecules as CH and CN in interstellar space. In 1968, Charles Townes, extending his “playful” pursuits into the growing field of radio astronomy after the discovery of the maser–laser (see Chapter 7), had detected “his old friend” NH3 in the Orion Nebula, and water vapor and formaldehyde (H2CO) in the following year. To add to this, in 1970 Wilson, Jefferts, and Penzias surprisingly discovered very strong signals from carbon monoxide (CO) (Wilson et al. 1970). One might have expected naively that molecular hydrogen would be the source of space’s most important microwave radiation, but simple quantum mechanical calculations show that symmetrical molecules such as hydrogen, having no permanent electric dipole moment, cannot emit radiation through ordinary rotational transitions. Consequently, the lightest, most-abundant asymmetrical molecule in space is CO. Its abundance is ∼1% of H2, and hence it emits the strongest microwaves. Further surprises followed. In 1975, Zuckerman published news of the discovery of large quantities of ethyl alcohol, the drinker’s favorite, in the Galactic Center, humorously remarking (Zuckerman et al. 1975): that this truly astronomical source of ethyl alcohol . . . would yield 10²⁸ fifths at 200 proof.

This is a gargantuan quantity, roughly corresponding to 10,000 planet Earth volumes full of absolute alcohol. For the whimsically minded, such a quantity indicates that the center of the Galaxy might be a no-go area for prohibitionists, as gravitational potential energy there is as low as it can get, so they would have nowhere to dump the confiscated liquor!
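
For readers who like to check such numbers, here is a rough back-of-the-envelope sketch, assuming a “fifth” is about 0.75 liters (the figures below are illustrative only, not from the original paper):

```python
import math

# Back-of-the-envelope check of "10^28 fifths ~ 10,000 Earth volumes".
# The specific values below are assumptions for illustration only.
FIFTH_M3 = 0.757e-3          # one "fifth" (1/5 US gallon) in cubic meters
N_FIFTHS = 1e28              # the quantity quoted by Zuckerman et al. (1975)
EARTH_RADIUS_M = 6.371e6     # mean radius of the Earth in meters

alcohol_volume = N_FIFTHS * FIFTH_M3                      # ~7.6e24 m^3
earth_volume = (4.0 / 3.0) * math.pi * EARTH_RADIUS_M**3  # ~1.1e21 m^3

print(f"Earth volumes of alcohol: {alcohol_volume / earth_volume:,.0f}")
# Prints roughly 7,000 -- the same order of magnitude as the
# "roughly 10,000 planet Earth volumes" quoted above.
```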

Kroto, his Sussex colleague David Walton, and Anthony Alexander started experimenting with new ways of synthesizing carbon chains and testing their quantum-mechanical prowess by trying to understand the rotational and bending modes of the molecules they produced. But they needed a dedicated microwave spectrometer, which luckily the Science Research Council agreed to provide. However, with the impenetrable logic often used by bureaucratic organizations, they awarded custody of the instrument to his theoretician collaborator at the University of Reading, who was located some hours away by any means of transport except helicopter. Thus Kroto and his fellow experimentalists had to tramp to Reading every few weeks until he could persuade his patrons some years later to provide an instrument where it was actually needed, as the research council did in 1974. Inefficiencies excepting, however, the council did actually deliver the goods; but nowadays, he may not be so lucky as he would be required to provide evidence of the potential economic or social “impact” the instrument might be used to create. Since he was proposing to simulate in the laboratory the conditions thought to prevail in red-giant stars that might be responsible for synthesizing the long carbon chains detected by radio astronomy, his chances today might not be very high. Kroto and his colleagues had successfully synthesized HC5N and measured its spectra. According to then-current understanding, it was predicted that molecules of this length would only rarely be found in the interstellar media (Longair 2006). Indeed, Kroto had made a rough estimate on the basis of previous molecules observed that each additional carbon atom in a chain should reduce the probability of its formation by approximately a factor of ten. Nevertheless, as any scientist knows, observations always trump theoretical predictions; so, undeterred, he and his radio astronomy collaborators from the National Research Council in Canada looked for and unexpectedly found the HC5N chain in the giant molecular cloud, Sagittarius B2, near the Galactic center. Playing their luck, they repeated the same procedures with HC7N and went on to find that molecule in a dark cloud in Taurus, which, of course, was expected to be present in very much lower concentrations than the C5 molecule. In 1976, impossible as it may seem, they discovered HC9N, then the longest chain molecule yet detected. Clearly they were witnessing some new type of process, but what? Before long, these chains also began to be observed in the vicinity of cool red giant carbon-rich stars such as the catchily named IRC+10216, and a possible route to synthesis began to appear. Following a visit a few years later to Kroto’s lab in Sussex, Robert Curl suggested that Kroto should come and see what Curl and his colleagues were doing at Rice University in Texas. Curl was born in Alice, Texas, in 1933, the son of a Methodist minister. When he was 9 years old, his parents gave him a chemistry set. Within a week, he had decided to become a chemist and, according to his Nobel autobiography, never wavered from that choice. Obviously, the set he was given was not one of the sterile excitement– bangs- and

smells-free, apologies-for-chemistry versions that are the only ones a wellwishing relative can buy today. When the time came for Curl to choose a college, he opted for what was then the Rice Institute partly because at that time Rice did not charge tuition fees; his parents would have been hard pressed to send him to a university that did. While his father held the highest administrative office (not counting the Bishop) in the Southwest Texas Conference, he did not make much money. Curl’s most impressive teacher there was Richard Turner whose enthusiastic discussion of the quantum-mechanical rules that determine which rotational modes of excited molecules are allowed, and the pioneering work of Kenneth Pitzer (1914–1997) in that field, made him want to go to University of California at Berkeley to work with him. When he was still a very young graduate student at Caltech, the ultra-independently minded Pitzer had in 1935 the audacity to consider a long-standing problem arising from the lowtemperature heat capacity of ethane, C2H6 (ethane was another of Michael Faraday’s amazing discoveries), an important industrial product. The ethane molecule may be thought of as a rod (bond) joining the two carbon atoms, with three planar hydrogen atoms at each end that may rotate like pinwheels. Pitzer found that a quantum mechanical analysis of its rotational states explained the heat capacity data. It turned out to be a seminal discovery with far-reaching implications. As Curl noted much later in his obituary of Pitzer, the limitations on such motions have wide-ranging effects throughout chemistry and determine the energetics of the various conformations (structures) available to organic molecules (Curl and Gwinn 1990). Knowledge on the relevant conformation is vital to understanding the functions of protein enzymes, for example, and the kinetics of many chemical reactions. No wonder the young Curl was impressed! In 1958, following a postdoctoral position at Harvard, which was acquired with Pitzer’s help, he received, out of the blue, an offer to return to Rice as an assistant professor. The prospect of a warm climate and familiar surroundings full of many happy memories was irresistible, and he immediately accepted. He is still there. Arriving at Rice for the arranged visit just after Easter of 1984, Kroto found Curl enthusing about his young colleague Rick Smalley’s recent work. Richard (Rick) Smalley (1943–2005) was born in Ohio exactly a year before D-Day. When he had reached the impressionable age of 14, the Russians rocked the world with the launch of the Earth’s first artificial satellite—Sputnik 1—thereby sparking a revolution in US military and political thinking, as well as the start of the space race. However, it also had a more lasting legacy in that it persuaded Smalley that science and technology would be where the action was going to be in the coming decades and that, therefore, he should become a scientist. He was lucky in that his aunt, Dr. Sara Jane Rhoads, was one of the first women in the United States ever to be appointed a full Professor of Chemistry. She was his hero; and he used to call her, lovingly, “The Colossus

of Rhoads.” Her example was a major influence in his specific career choice; and as his “soul had been imprinted by an exposure to chemistry,” he followed his aunt rather than going into physics or engineering. After graduating in chemistry from the University of Michigan in 1965, he decided to take a job in the chemical industry while he worked out what he really wanted to do and to live a little in the “real world,” as he put it. Thus he accepted a post at a large polypropylene plant run by the Shell Chemical Company in New Jersey where he worked as a quality-control chemist in a high-tech laboratory. In 1969, he moved to Princeton to study for a PhD in the Department of Chemistry. In 1973, just before graduation, he accepted a postdoc position with Donald H. Levy at the University of Chicago. At that time, the final oral exam for a Princeton PhD included a defense of three original research proposals selected by the candidate. This is an excellent discipline for young scientists to master, and it is one that encourages them to develop open and inquiring minds over broad fronts. One of the proposals he opted to defend involved a supersonic expansion to cool NO2 to such a low energy (temperature) that only a single rotational state could be populated and then to use a tunable laser to study the now greatly simplified spectrum. His choice was most prescient as this technique was to play a crucial role in his future research. In 1976, Smalley moved to Houston as an assistant professor in the chemistry department at Rice University mainly because he had heard of the beautiful laser spectroscopy that was being done there by Robert Curl. When Kroto arrived at Rice, Curl told him about Smalley’s recent results from his wonderful new machine for synthesizing complex molecules. Smalley’s group had just shown, for example, that SiC2 had a triangular rather than a linear structure. Smalley’s machine was huge—it was over 12 feet high (as Smalley said, “we do things big in Texas”) and ingenious. It had several stages, the accomplishment of each of which was a technological tour de force:

• An intense pulsed laser bombarded a target to produce very high temperatures—3000 K or more—and ejected clouds of highly excited reaction products from the target.
• Laser pulses could be precisely time correlated with respect to bursts of helium gas through the apparatus. The bursts cooled the reaction products to sufficiently low temperatures to allow the formation of molecular clusters or chains, and could sweep them down a column with variable transit times (in order to allow time for products to stabilize) toward an expansion nozzle.
• Downstream from the nozzle, the products expanded supersonically into a vacuum, an expansion that supercooled them to a few degrees Kelvin.
• The slowly moving clusters could then be analyzed according to their time of flight down a spectrometer and their spectra studied (a rough illustration of this time-of-flight step follows below).
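
The final step is standard time-of-flight mass analysis: after ionization and acceleration, heavier clusters move more slowly and arrive later. The following sketch uses purely illustrative parameters—the text gives no dimensions or voltages for the Rice instrument—but it shows why clusters of 60 and 70 carbon atoms appear as cleanly separated peaks.

```python
import math

# Minimal sketch of time-of-flight (TOF) mass analysis for singly charged
# carbon clusters. All parameters are illustrative assumptions, not the
# actual settings of the Rice apparatus.
E_CHARGE = 1.602e-19      # elementary charge, C
AMU = 1.6605e-27          # atomic mass unit, kg
DRIFT_LENGTH = 1.0        # drift-tube length, m (assumed)
ACCEL_VOLTAGE = 1000.0    # accelerating potential, V (assumed)

def flight_time_us(n_carbons: int) -> float:
    """Time (microseconds) for a singly charged cluster of n carbon atoms
    to cross the drift region after acceleration through ACCEL_VOLTAGE."""
    mass = n_carbons * 12.0 * AMU
    speed = math.sqrt(2.0 * E_CHARGE * ACCEL_VOLTAGE / mass)
    return DRIFT_LENGTH / speed * 1e6

for n in (60, 70):
    print(f"C{n}: {flight_time_us(n):.1f} microseconds")
# Under these assumptions C60 arrives after ~61 microseconds and C70 after
# ~66 microseconds, so the two cluster peaks are well separated in time.
```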

On hearing about all this, Kroto wondered if Smalley’s machine might be used to simulate the environments found near red giant, carbon-rich stars in which the impossibly long carbon-chain molecules were being observed, and he suggested substituting a graphite target for the metal one currently being used. However, for reasons that are now obscure, it was some 17 months before that simple alteration was made to Smalley’s machine. When Kroto made his suggestion, however, Smalley knew that Andy Kaldor and his group at the Exxon corporate research labs in New Jersey were already doing similar work to what Kroto was proposing. Kaldor was using an almost identical machine to Smalley’s—indeed as Smalley explained in his Nobel Prize Lecture, the kit had been designed and built at Rice for Kaldor to study carbon clusters and coke build-up in industrial catalysts (Smalley 1996). In 1984, Kaldor published news of the predicted carbon clusters, and many others not predicted involving up to ∼100 atoms (Rohlfing et al. 1984). Kaldor’s group had therefore actually seen evidence for the shortly-to-be-celebrated C60; but it was not very strong. Perhaps because it did not seem relevant to their agenda at the time or for whatever reasons, the Kaldor group did not give it their close attention. At this stage I should warn that Smalley has painted a very different picture from mine. It involves such thorny issues as who saw what, when they saw it, the reasons for looking, and even what the new discovery should be called. As the reader should expect, I have read extensively on the background to these discoveries, but I have to confess that I do not understand why Smalley has raised these difficulties. In what follows, I will report as objectively as possible what seems to be the course of events, and why I disagree with some of Smalley’s opinions. I must declare an interest. I have known Harry Kroto for more than 20 years, and we are good friends. We met at the University of Sussex following an introduction by another iconoclastic chemist there, Ken Seddon, who was a Venture Researcher (see Chapter 12). Seddon was then, as Kroto and his colleagues were, too, on the threshold of opening up new domains of chemistry. In the mid-1980s, Seddon had submitted a proposal to the UK’s Science and Engineering Research Council (SERC) to study the chemistry of ions in an ionic environment. Hitherto, chemistry had been almost exclusively concerned with the study of covalently bonded molecules in a molecular environment—a description that virtually covers the entire field of organic chemistry, for example. But Seddon had identified a domain that seemed to have been unexplored. Astonishingly, after the usual trial by peer preview, SERC gave the young Seddon’s proposal a gamma rating, the Council’s lowest possible, and rejected it. But Seddon knew about our Venture Research initiative, having been a junior participant in the scheme a few years before, and brought the same proposal to us. Following our usual face-to-face discussions, we could not have been more eager to fund it. But our small unit never had funds of its own, and as usual we had to convince a BP board to provide those funds. Our recommendations in these respects were almost invariably accepted, but on

this occasion we ran into a solid wall of resistance that was created, we suspect, by an influential-but-unknown (to us) SERC-appointed peer who had recommended rejection of the original Seddon proposal. Apparently, having heard of Seddon’s audacious approach to us, that person was using his influence to prevent his advice from being ignored and bypassed. However, after a prolonged and bitter struggle, the BP board accepted our recommendation, and we obtained the support that Seddon needed. His Venture Research not only turned out to be scientifically very successful* (as usually happens when new questions are asked in important fields), but it also transformed the field of green chemistry,† and Seddon became the UK’s most cited chemist. Sadly, however, BP terminated its support for Venture Research about a year later. It seems that we had won the battle but had lost the war! BP honored all its existing Venture Research contracts so that Seddon could continue, and with BP help, I set up a company—Venture Research International— with the sole purpose of being the instrument into which new sponsors could place their investments. Seddon suggested that I ask Kroto to design a logo for us. Kroto had long had a keen interest in design and architecture and, indeed, had flirted with the idea of taking it up as a career if he could not make the grade in research, so he was delighted to oblige. We still use his fine logo; but despite our best efforts over more than 20 years and the mounting evidence that the radical researchers we had selected had been very successful, we have not been able to raise a penny for Venture Research. Nevertheless, Kroto has been unwavering in his support for our campaign. He has also supported another small group of us who are working to bring scientific thinking into the UK’s often logic- and evidence-free research policies (see Appendix 1, for example). Returning to my story, in August of 1985, Kroto received a phone call from Curl telling him that his proposed experiment was about to be done at Rice— would he like to come? Kroto dropped everything, of course, and the new experiments began on 1 September. They immediately saw the long chains that red giant stars, as Kroto had suspected, had been pumping into the interstellar media; but there was also a strange “interloper” (as Kroto describes it) with a mass of 60 carbon atoms, with a smaller second peak at a mass of 70 carbons, the pair being dubbed “the Lone Ranger and Tonto.” There was no question of their being ignored this time. Almost 2 weeks of intensive study now followed during which time Smalley’s machine was optimized for their production, C60’s tentative molecular structure was identified as a truncated *  The full story is told in Braben (2008). † Green chemistry, also known as sustainable chemistry, is the design of chemical products and processes that reduces or eliminates the production of hazardous substances. The use and production of these chemicals may involve reduced waste products, nontoxic environments, and improved efficiency. Green chemistry is a highly effective approach to pollution prevention because it applies innovative solutions to real-world environmental situations.

Figure 20 The molecular structure of C60: Buckminsterfullerene, the first of a new allotrope of carbon to be found for over a century. The molecule’s diameter is about one nanometer. (Source: Wikipedia.)

icosahedron (see Figure 20), and a paper was prepared for submission to Nature on September 13 (Kroto et al. 1985)—an archetypal example of a supreme team effort, as Kroto describes it in his 1996 Nobel Lecture. The group, led it would seem by Kroto, immediately compared the structure with the geodesic domes of the American engineer, Richard Buckminster Fuller (1895–1983), and so the new molecules became known as buckminsterfullerenes, a name that not surprisingly soon became truncated to “the fullerenes.” The discovery of C60 led to the creation of an entirely new branch of carbon chemistry. The fullerenes, of which C60 is the smallest stable example, are composed entirely of carbon in varying combinations of hexagon and pentagon structures. They form spontaneously without the need for catalysts. They have a wide range of atomic weights and create carbon-cage structures that can be open or closed but are large enough to play host to other atoms and molecules. See Figure 21 for a photo of the group taken in 1985. Indeed, the Nobel Prize for Chemistry was awarded in 1996 to Curl, Kroto, and Smalley not for C60 alone, but “for their discovery of the fullerenes.” As Sean O’Brien, a young scientist in Smalley’s team at Rice, put it recently: Discovering the fullerenes was a stunning shock to the entire world. While searching for proof of the soccerball structure we created something which could change the face of chemistry. This was another example of finding something for which we weren’t looking.
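
The hexagon-and-pentagon architecture is more constrained than it might appear. A standard counting argument—not spelled out in the text, but worth recording—applies Euler’s polyhedron formula to any closed cage in which every carbon atom forms three bonds, and shows that such a cage must contain exactly 12 pentagons, whatever the number of hexagons; C60 is simply the case with 20 hexagons.

```latex
% Closed cage of p pentagons and h hexagons; every vertex (carbon atom)
% has three edges (bonds):
V - E + F = 2, \qquad 3V = 2E, \qquad 2E = 5p + 6h, \qquad F = p + h
% Eliminating V and E:
\frac{5p + 6h}{3} \;-\; \frac{5p + 6h}{2} \;+\; (p + h) \;=\; 2
\;\;\Longrightarrow\;\; p = 12
% For C60: V = 60, so E = 90 and F = 32, i.e. 12 pentagons and 20 hexagons.
```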

Figure 21 Photograph of the C60-discovery group taken in 1985 at Rice University. Robert Curl is standing. Kneeling are (left to right) Sean O’Brien, Rick Smalley, Harry Kroto, and Jim Heath. The ghost of a C60 molecule—a “Bucky ball”—is in the foreground. (Reproduced by kind permission of Harry Kroto.)

In 1991, the Japanese scientist Sumio Iijima discovered nanotubes, in which carbon hexagons are arranged concentrically creating effectively a onedimensional tubular structure of arbitrary length whose ends may be capped with fullerene-like pentagon structures (Iijima 1991). These discoveries have opened up new fields of chemistry, materials science, and nanotechnology with a virtually limitless range of potential applications in nearly every technological field. (See, for example, Dai [2006].) Kroto’s determination to understand the formation mechanisms of interstellar carbon chains purely for its own sake and independently of any practical application or benefit played a crucial role in these discoveries. However, while acknowledging this motivation and agreeing that it played a part, Smalley said in his Nobel Lecture, delivered in 1996 (Smalley 1996): The notion that the discovery of fullerenes came out of research into the nature of certain molecules in space is highly appealing to scientists. It is hard to think of any line of research that is less likely than interstellar chemistry to have some practical, technological impact back here on Earth. So if fullerenes turn out to lead to the technological wonders that some people (like me) believe are in our future, then perhaps one can argue that any research project can get lucky too. I have argued this way in the past, and I still believe there is some sense to it—but only a little. In fact, the fullerenes were discovered as a result of decades of research and development of methods to study first atoms, and then polyatomic molecules, and ultimately, nanometer-scale aggregates. It was well-funded

research that at nearly every stage was justified by its perceived relevance to real world technological problems. To a great extent, many of these earlier bets as to the worldly significance of fundamental research actually paid off.

These remarks are consistent with the new policies followed by almost every national funding agency today, but endorsement by a Nobel Laureate gives agencies powerful arguments that may help convince politicians, for example, that their new policies are indeed sound and effective despite what critics like myself may say. I have to admit that when I first read them my heart sank with despair, as Smalley’s views are in direct disagreement with the philosophy I expound in this book. Scientists ignore data at their peril, of course, no matter how uncomfortable they may be; any viable theory must be able to accommodate all data and all objections. But data on the efficacy of science policies at the margin where major discoveries are made are hard to come by, which is why I have written this book. Major discoveries are rare, and the new policies work well enough for the mainstreams—fields that make up the great proportion of all academic research. One can point out that the development of the new policies has usually defied scientific logic at every turn, but in my experience I have not come across a funding agency that is impressed by that argument. Feynman’s killer definition—“science is the belief in the ignorance of experts”—is based on centuries of history. It should only be ignored if it can be shown to be wrong, flawed, or irrelevant. Yet funding agencies, struggling to survive the huge and increasing political pressures to demonstrate value for money are virtually forced to ignore it and allow pragmatism to triumph over reason. One can also observe that these policy developments have been strongly influenced by industry. While these policies might be defendable within industry itself—companies must be profitable to survive—they should be relaxed for academic research. A large industrial company (BP) employed me for 10 years (1980–1990); and although Venture Research was not part of BP’s mainstream operations, I was in constant contact with those who were. Thus, I was only too familiar with the fact that my industrial colleagues had to justify their existence every minute of every day in terms of a profitable return to an internal or external customer. As I mentioned in the Introduction, these rigorous policies have been introduced only over the past few decades, they are not necessarily part of the industrial landscape, and there often used to be some flexibility in their application. In the 1960s, for example, IBM began the Fellows program in which scientists were appointed for 5 years to be “wild ducks, dreamers, heretics, mavericks, gadflies, and geniuses.” Their remit was simply to “shake up the system.” Five won Nobel Prizes. GE and the Bell Labs had similar programs. However, all that seems to have been forgotten, perhaps because such programs are not compatible with the now virtually universal requirement that they should satisfy the armies of accountants and other effi-

ciency monitors who will insist on quantifying progress at every predictable step. However, the new regimes are now virtually all we have. The unremitting discipline they demand cannot fail over the years to change ways of thinking; and this may explain, for example, why, as I have directly experienced, industrial scientists often find the idea of allowing academics freedom to do as they please very difficult to accept—“if I must account for my time, why shouldn’t they?”—a remark that was not only implied but often stated. BP frequently recruited senior academics with long associations with the company to fulltime industrial positions, and they always seemed to adjust very quickly to the new disciplines. One might say their minds had been prepared by extended contacts with regimented ways! Smalley was an academic when he took part in the Nobel Prize–winning discoveries, of course, but had started his career in industry and had maintained strong industrial links thereafter; and I suggest that he had allowed his industrial colleagues’ mind-sets to color his opinions. As I mentioned earlier, Kaldor’s group saw the evidence for C60 in 1984, as did Smalley. Nevertheless, for whatever reason, they did not appreciate its full significance and it was ignored. That was despite Smalley’s claim that the research “was justified by its perceived relevance to real-world technological problems.” If that was the case, why was the evidence ignored? The question is especially important because the evidence was pointing to one of the greatest discoveries in chemistry in modern times, and one that could hardly fail to have strong technological implications. It is well known that luck can play a major role in scientific discovery, but if scientists are not alert enough to recognize the potential opportunity Lady Luck might be fleetingly pointing toward, it will soon pass them by, as was apparently the case on this occasion. “Being alert” in this context means having a mind that is completely open to new ideas, one that is not prisoner to external influences or agendas. Thus, over a year later, Kroto together with Heath and O’Brien, “working flat out day and night,” saw the same or similar evidence and immediately pursued it relentlessly with extraordinary vigor and passion. I suggest that this was because Kroto saw it as a new and unexplained observation in what he thought was an important scientific field. Indeed, no unconstrained freewheeling scientist would have been able to resist it. At that stage, its technological implications played no role whatever. Kroto’s funding problems did not end when the group’s momentous discovery became known. The UK’s Engineering and Physical Sciences Research Council (EPSRC) turned down many of his proposals, each of which would usually have taken months to prepare. In one 5-year period (1990–1995), he submitted seven substantial proposals on C60 and C70 chemistry: one was partially approved but the rest were rejected. I include here (with Kroto’s permission) an anonymous reviewer’s comments in full on one of these rejected proposals requesting £290,000 for support over 3 years:

The Investigators have a world-class track record in the field of C60 and C70 chemistry, and the department is outstandingly well-equipped for carrying out this research. This proposal is very much along the lines that “we’ve made major contributions to this field in the recent past, and we want to continue to do so.” There is very little chemical detail in this proposal. I have to admit a fundamental antipathy to this approach. Nevertheless, the spate of unpublished results mentioned in the proposal indicates to me that the momentum is being maintained, and I think the work of the Sussex C60 group should continue to be supported by EPSRC, although I do think the investigators are probably unduly sanguine about real applications of this chemistry in the short or even medium term. The planning of this project seems quite rudimentary, and I find it quite difficult to give any real assessment of the need for two research assistants, both for 36 months.

These comments contain an apparently judicious combination of heavy criticism balanced by faint praise. Beleaguered funding committees nowadays often find themselves in situations where they might have received 100 proposals like Kroto’s but have the resources to support only 25, a success rate that is quite typical. Thus, committees are more often than not desperately in search of evidence they can reasonably use to reject a particular proposal. Comments such as the above fine example are ideal, therefore, as they allow a committee to select from them à la carte, depending on their progress in meeting their quota for rejection. Another review on a Kroto proposal said: The subcommittee found the proposals too open ended and was not impressed by the details of the case compared to other applications it had to consider.

But the more open-ended a proposal is, the more likely it is to lead to unpredictable discoveries! Kroto’s funding problems eased somewhat after the award of a knighthood in 1996 and a Nobel Prize a little later the same year, as those who follow the ways of the UK establishment will readily understand. However, the state of the UK’s general support for academic chemistry had long been one of Kroto’s serious concerns. His own department at the University of Sussex had been scheduled for closure until he threatened to return his honorary degree if the university went ahead with this plan. The situation elsewhere was even worse. Between the early 1990s and 2004 approximately 30 UK university chemistry departments were closed, including those at well-known universities such as King’s College London.* In February, 2004, Kroto wrote in The Telegraph:

* In 2011, the chemistry department at King’s College London was reopened. A number of other universities have also either reopened their previously closed chemistry departments or announced plans to do so.

Now, however, the infrastructure is crumbling. The fundamentally flawed research assessment exercise has become a pretext for a slash-and-burn policy in the university science heartland rather than a golden opportunity to improve our science base.

My criticisms of the egregious research assessment exercises were outlined in the Introduction, but Kroto’s voice carries much more weight, as he is so well known. In Poster 1, I mentioned the Royal Institution’s failure to offer Kroto the widely expected appointment as Director, to the astonishment of many of his friends. Perhaps he had finally crossed one too many UK establishment figures? We will never know, of course. It may not be surprising, therefore, that in 2004 Kroto left Sussex and the UK to take up an appointment at Florida State University, where he remains today.

11 John Mattick: A Prominent Critic of Dogma and a Pioneer of the Idea That Genomes Contain Hidden Sources of Regulation

In previous chapters I have outlined some of the last century’s most revolutionary scientific ideas. The scientific community usually would have taken years to accept that the new sciences underpinning them were more correct than anything that had gone before, but almost all have now been assimilated into the lexicons of established thinking. Nevertheless, these transitions are rarely accomplished painlessly and, indeed, some damaged egos may still be smarting. We must expect these downsides. Revolutionary ideas always upset vested interests—those well-connected and influential opponents who will do their utmost to delay or prevent their acceptance as Max Planck (to name but one perceptive and outspoken searcher after truth) famously pointed out (see Chapter 3). But such ideas, once properly formulated and justified, are usually powerful enough to eventually overcome all opposition, and we should all be thankful. Apart from vastly expanding scientific horizons and intellectual enrichment in general, their technological spin-offs have transformed almost every aspect of everyday life for a large proportion of the global population. Revolutionary scientific ideas are extremely good for humanity’s collective well-being, therefore, which means that we should take every possible step to ensure that the mavericks who create them are not excluded as society’s social and political structures evolve. This is exceptionally difficult, as I will discuss further in Chapter 12, because nobody can possibly know which mavericks and which ideas will be important in the future.

In this the last of the essentially “scientific” chapters, I have come to a point of departure in that unlike the narratives of previous chapters I will outline a possible revolution that is still in progress, which means, of course, that no one can say at present where it might lead. It focuses on the Australian scientist John Mattick’s courageous attempt to shed more light on an area that he sees as largely shrouded in darkness, although even that perspective is open to challenge and interpretation. His putative revolution concerns evolution and genetics at their most basic molecular levels; but, even if it fails, credible answers to the original and wide-ranging questions he is posing could radically transform the ways we think about biology, and the outcomes alone would be sufficient justification for his crusade. However, serious students of these seminal subjects will quickly find that they are heading toward one of science’s Great Frontiers, those formidable gateways to intellectual wilderness. Exploration there is not for the faint-hearted, and funding agencies should be doing their utmost to encourage challenging initiatives, especially the iconoclastic. But such consensus-defying actions would not be consistent with current policies. This is a very serious problem. From scientists’ perspectives, it means that having sufficient courage and insight to contemplate these dangerous careerthreatening excursions is no longer enough. Nowadays, they must also have the bureaucratic skills to convince the agencies’ optimizing apparatchiks that their forays into the great unknown will be specifically beneficial or they will not even be considered for funding. We—that is scientists, pressure groups, and anyone who thinks they might have influence—must strive, therefore, to make current policies more tolerant of mavericks. Unless we do, science’s great frontiers will be no-go areas, opportunities for new science will be lost, and, consequently, future prosperity will be in jeopardy. Few subjects generate more controversy than evolution and genetics, and not only among scientists. Charles Darwin (1809–1882) began formulating his now-famous ideas on evolution soon after returning in 1836 from his epic round-the-world voyage aboard HMS Beagle. However, for whatever reasons— including the possibility that scientists might be critical of them and him, and the risks of offending the religious establishment and particularly his devout Unitarian wife—he did not publish for more than 20 years, and only then after a young and almost unknown British naturalist Alfred Wallace (1823–1913) had written to tell him about his own very similar ideas. In a remarkable gesture of scholarly friendship, the famous and well-connected Darwin then generously suggested that they submit a joint paper to the Linnaean Society. Wallace accepted, of course, and their paper was published in 1858. Today, however, the very question of whether Darwin’s prolonged procrastination could be interpreted as delaying is even a subject for debate. In 2007, John van Wyhe from the University of Cambridge published an extensive study in which he concluded that Darwin did not “delay”—van Wyhe briefly debates the word’s meaning—he merely declined to publish his ideas until he felt he had done enough work on them (van Wyhe 2007). Other experts are not

convinced. They include David Kohn, the editor of the American Museum of Natural History’s Darwin Digital Library of Evolution, who points out that in his later works Darwin explicitly states that he delayed until he was convinced that the climate was right. In a Nature News Item (Odling-Smee 2007, p. 478), Kohn says: It seems likely, therefore, that he would have been aware of the controversy his theories would cause from the outset, and probably avoided discussing humans in Origin of Species for this reason.

I agree with Kohn. My previous chapters touching upon evolution and genetics contain many examples of scientists whose ideas, whatever their intentions, pitted them against powerful vested interests and who suffered as a result. Oswald Avery (see Chapter 5) showed that deoxyribonucleic acid (DNA) is the genetic molecule, thus flying in the face of the virtually universal assumption that proteins played that role. It was, therefore, one of the most important and seminal discoveries ever made in biology. But he was not awarded a Nobel Prize, which such a revolutionary discovery surely deserved, nor was the validity of what he had done generally accepted in his lifetime. Barbara McClintock (see Chapter 6) stopped publishing on her “jumping genes” (transposons) for many years because her work was being so poorly received and she did not want to “add weight to the biologist’s waste basket.” And Woese’s discovery (see Chapter 8) of a new domain of life, the archaea (which organisms have also played major roles in the Earth’s planetary evolution) went unrecognized for decades. DNA, ribonucleic acid (RNA), and proteins are essential molecular components of all types of living cells—eukaryota, bacteria, and archaea. One of the main goals of molecular biology for the past 50 years has been to unravel the ways organisms handle information flows between the members of this molecular trinity that allow organisms to grow, develop the appropriate morphologies, and reproduce. However, astonishing as it may seem, the word “dogma” has almost invariably been used to describe the current state of understanding of these highly complex systems, a word that means, of course, “principles laid down by an authority as incontrovertibly true.” I do not know of a single principle in science to which such a description could accurately be applied. Nothing in science is or will ever be “incontrovertible.” The pioneering physical scientists featured in my previous chapters usually based their new perspectives on assertions, conjectures, and proposals aimed at demonstrating an increased understanding of an important aspect of the natural world, but they always offered them as axioms, never as “dogma.” They were, in effect, challenges to the scientific community to accept or refute them and to gather the appropriate experimental evidence that might support their positions. Indeed, all scientists have a continuing duty to treat knowledge for which they have not been directly responsible as hearsay, as if they were in a court of law,

161

RNA

DNA

PROTEIN

RNA

PROTEIN

Figure 22 These diagrams, illustrating biology’s “central dogma,” refer to information flows between DNA, RNA, and proteins. The left-hand diagram indicates all the possible information flows between members of this trinity. The right-hand diagram indicates the potential and possible flows proposed by Francis Crick in 1970. Solid lines indicate probable flows; dotted lines indicate possible flows; and absence of lines indicates undetected transfers. For example, no direct information flows have been detected between protein to DNA or to RNA. (The diagram is taken from Crick 1970.)

and to remember that the validity of any axiom, even one that currently appears unassailable, is probably only temporary. Minds must always be kept open, but even though the revolutionary ideas outlined in this book might so far have survived all scrutiny, I doubt if anyone would claim that they are incontrovertibly true. They are merely the best that can be done for the time being. The “central dogma of biology” often expressed as “DNA makes RNA makes protein” was first proposed by Francis Crick in 1958 who also restated it in 1970 (see Figure 22; Crick 1970). Proteins are, of course, essential to a cell’s metabolism and structure, but it was well known even in the 1950s that a copy of an organism’s complete DNA portfolio is contained in every one of its cells. In humans, for example, the proteins found in liver cells are different from those found in kidneys or skin; but the genomic DNA—from which the appropriate gene is selected and the proteins for which it codes are transcribed and synthesized—is identical in every case. Clearly, therefore, there must be mechanisms that control a gene’s expression that vary according to a cell’s location within the organism; it would not help functionality if liver cells, for example, made the proteins needed by kidneys. Therefore, all cells must be sensitive-to-strong, spatially dependent signals that can influence large sequences of its genome’s DNA and, for any specific gene within it, silence or inactivate its DNA for most of and sometimes for all of the time. This “differential expression” problem should therefore have led to doubts about the universality of the central dogma from the beginning. Studies using bacteria and bacterial viruses as model systems have over many years established that the information contained in a gene’s DNA, RNA, and proteins were expressed in a continuous uninterrupted stream of

162

JOHN MATTICK

instructions. Since work on genetics suggested that genes in highly complex eukaryotic organisms behave similarly to those of relatively simple prokaryotes, it was naturally assumed that bacterial gene structure would be common to all life forms. Furthermore, if gene structures were the same, then their regulatory mechanisms were probably very similar, too. Indeed, the French molecular biology pioneer and Nobel Laureate, Jacques Monod (1910–1976), famously said of the bacterium Escherichia coli: Anything found to be true of E. coli must also be true of elephants.

In 1977, however, the American geneticist Philip Sharp and the British biochemist Richard Roberts independently made the sensational discovery that eukaryotic genes—those found in plants, fungi, elephants, and ourselves—are substantially different: their coding sequences are not continuously linked as they usually are in bacteria but are divided into a mosaic of many separate and distinct segments by apparently meaningless sequences of genetic material that were later called introns.* They won the Nobel Prize for Physiology and Medicine in 1993 for this work, cited as being “for their discovery of split genes.” To illustrate the roles Sharp and Roberts envisaged for introns—if they were introduced into written prose for example, they might transform the phrase “central dogma” into: “cenxyzxyztralpqrpqrdoxyzxyzpqrgma.” As such a change would probably render it unintelligible, our eye–brain system would need to contain the necessary software for removing the apparently useless insertions if the phrase were to be read accurately. In biology, introns are removed by means of an editing process called “splicing,” as Professor Bertil Daneholt of the Nobel Assembly of the Karolinska Institute described in his Presentation Speech at their 1993 award ceremony (Daneholt 1997): Roberts and Sharp also predicted that a specific genetic mechanism is required to enable split genes to direct the synthesis of proteins and thereby to determine the properties of the cell. Researchers had known for many years that a gene contains detailed instructions on how to build a protein. This instruction is first copied from DNA to another type of nucleic acid, known as messenger RNA. Subsequently, the RNA instruction is read, and the protein is synthesized. What Roberts and Sharp were now stating was that the messenger RNA in higher organisms has to be edited. The required process, called splicing, resembles the work that a film editor performs: the unedited film is scrutinized, the superfluous parts are cut out and the remaining ones are joined to form the completed film. Messenger RNA treated in this manner contains only those parts that match the

* A word derived from intragenic regions, and which was first suggested by Harvard University’s Walter Gilbert in 1978. He also suggested the word “exon,” from expressed regions. Exons are the continuous genetic sequences required for functional protein synthesis.

gene segments. It later turned out that the same parts of the original messenger RNA are not always saved during the editing—there are choices. This implies that splicing can regulate the function of the genetic material in a previously unknown way.
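
To make the earlier prose analogy concrete—a toy illustration only, not anything from Roberts’s or Sharp’s work—deleting the “intron” insertions xyz and pqr from the garbled phrase recovers the original text, which is essentially what splicing does to the raw messenger RNA transcript:

```python
# Toy illustration of splicing using the garbled phrase from the text:
# cut out the "introns" (xyz, pqr) and join the remaining "exons".
pre_mrna = "cenxyzxyztralpqrpqrdoxyzxyzpqrgma"
introns = ("xyz", "pqr")

mature = pre_mrna
for intron in introns:
    mature = mature.replace(intron, "")

print(mature)  # -> centraldogma
```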

Thus, when DNA is transcribed into RNA the introns are also transcribed. RNA is able to splice out or remove the apparently meaningless sections leaving only the exons. However, that was not all. It was soon found that genes’ intron-exon structures might also drive evolution by allowing new proteins to be made by reshuffling different regions of the DNA. These discoveries had been heralded, of course, by Barbara McClintock’s pioneering work on “jumping genes” or transposons outlined in Chapter 6. By the 1980s, therefore, it could be said that, while the central dogma might apply some of the time, it was about as far from being incontrovertibly true as colanders are watertight. It is surprising that the concept of a central dogma is still discussed nowadays, albeit in somewhat modified form. It would also seem that evolution or genetics is constrained by different social customs and practices from those that usually govern the rest of science, a bizarre differentiation that continues to the present day. When the draft sequencing of the human genome was triumphantly announced by US President Bill Clinton and British Prime Minister Tony Blair in 2000 after more than 10 years’ work by a large team (it was published later in a special edition of Nature; International Human Genome Sequencing Consortium 2001), Bill Clinton said: Today we are learning the language in which God created life.

Scientists too used similar language. Francis Collins, the Director of the US National Institutes of Health’s National Human Genome Research Institute, said: It is humbling for me and awe inspiring to realize that we have caught the first glimpse of our own instruction book, previously known only to God.

There is no question that the announcement marked a major and heroic milestone in the history of science, but why should a first attempt at determining a particular species’ genetic language be elevated into the social and religious stratospheres? Max Planck’s discovery of energy quantization, Charles Townes’ of the maser, Peter Mitchell’s of the routes to adenosine triphosphate synthesis and many others who made similarly revolutionary discoveries, opened up vast terrains of new scientific and technological opportunities, but did anyone claim that they had thereby uncovered knowledge, “known only to God”? Which heads of state went out of their way to publicly acclaim them? A religious person could say that in the beginning all His secrets were known only unto

164

JOHN MATTICK

the all-creating God. But surely He would view Homo sapiens’ unraveling of any secret of His universe with equanimity. What makes human genes and evolution so special? John Mattick entered this social minefield out of curiosity, an intellectual hobby as he puts it, rather than out of a determination to challenge its dogma. Born in Sydney Australia in 1950, he took a degree in biochemistry at the University of Sydney and a PhD in biochemistry at Melbourne’s Monash University in 1977. Shortly after, while working as a postdoc on mitochondrial genetics at Houston’s Baylor College of Medicine, he heard about Roberts’ and Sharps’ discovery of introns and started to think about the functions that non-protein-coding DNA and RNA could serve. During this contemplative time borrowed from his mainstream work he simply collected interesting data seeming to support the idea that non-coding sequences might possibly be hiding undiscovered regulatory systems, a hypothesis he found more interesting than the orthodox alternative. In 1988, after 6 years as a molecular biologist at Australia’s Commonwealth Scientific and Industrial Research Organisation in Sydney, he became professor of molecular biology and founding director at the University of Queensland’s Institute for Molecular Bioscience, posts that understandably committed him, as they do most academics these days, to satisfying the endless requirements of an increasingly pervasive and relentless bureaucracy, leaving him little time to think or advance his hobby. However, two sabbaticals supported by the Australian Research Council at Cambridge in 1993–1994 and Oxford in 2000 provided him with the necessary respite and ample opportunities for discussion with such inspirational scientists as Sydney Brenner, a future Nobel Laureate, and Simon Conway Morris, a pioneer in the evolution or early life on Earth. Indeed, Mattick’s sojourn at Cambridge led to his first publication on introns (Mattick 1994), which was the first to suggest that they might have functional roles. The central dogma that genetic information flows from DNA to RNA to proteins proclaims, in effect, that genes are synonymous with proteins. While that is more or less true for prokaryotes—the proportion of introns in their genomes is usually less than about 5%—it cannot be true for eukaryotes where introns play far more dominant roles—they apparently make up some 98% of the human genome for example. The discovery of introns also raised the question of what actually constitutes a gene, as it seemed to consist almost entirely of materials with unknown functions. Indeed, Barbara McClintock showing amazing prescience had had a similar thought in the 1950s. In a letter of March 12, 1950, to her scientific soul mate Marcus Rhoades (Comfort 2003, p. 151), she said: Are we letting a philosophy of the gene control our reasoning? What, then, is the philosophy of the gene? Is it a valid philosophy? It is the historical understanding of the evolution of this philosophy that is of prime importance in understanding the state that genetics has gotten into. There has been too much

JOHN MATTICK

165

acceptance of one philosophy without questioning the origin of this philosophy. When one starts to question the reasoning behind the origin of the present notion of the gene (held by most geneticists) the opportunity for questioning its validity becomes apparent.

The discovery of introns should therefore have precipitated a radical reappraisal of genetics, as it indicated (like the thirteenth chiming of a clock) that current understanding was seriously flawed. A similar situation confronted physicists at the turn of the twentieth century (see Chapter 4), but at that time scientists were generally uninhibited and could try any line of attack they thought might be promising. The resultant spectacular harvests fully justified society’s trust in them and, in a sane world, should have established such laissez-faire policies as the norm. Unfortunately, by the 1970s scientists had increasingly begun to feel the full force of the new, more dirigiste policies, although those working at labs with farsighted governance, such as Cambridge’s Laboratory of Molecular Biology and Cold Spring Harbor, continued to enjoy more tolerant regimes until much later. As the new policies specifically favored targeted, prioritized objectives selected by consensus and discouraged the open-ended searches for new perspectives that had earlier proved so successful, we should have expected that the quest for comprehension would suffer as a result.

To return to my prose analogy: I introduced sequences such as xyz and pqr to illustrate how introns might make their appearance in an organism’s DNA. My sequences were chosen to be otherwise meaningless. Nature, on the other hand, has ensured that real introns—large tracts of information, or whatever else they might be—have been replicated in every cell in every species at every cell division over the eons of time since eukaryotes first made their appearance on Earth. If that were not the case, they would not have survived until the present day. Why should Nature go to so much trouble if introns were meaningless junk?

As I have mentioned many times in this book, the scientific community’s natural response to new discoveries is to strive to incorporate them into existing frameworks with as little adjustment as possible, and the social pressures it can exert on those who would struggle to change the status quo can be considerable. However, Mattick points out (Mattick 2009) that Sharp’s and Roberts’ discovery was:

the biggest surprise in the history of molecular biology, and a finding that one might have expected would have given some more pause for thought about its possible implications and the validity of pre-existing assumptions. There are in fact, and were then, two possible alternative general interpretations of this finding: either the introns themselves are (in the main) genetically functional or they are not, each of which leads to quite different logical sequelae and predictions. However, only the latter alternative was entertained at the time, although the former is equally plausible, is far more interesting, and has far more profound implications.

Thus, because introns do not code for proteins, it was generally assumed that they had no function at all, despite the fact that a cell’s genetic machinery continues to transcribe introns into RNA. According to the dogma, proteins are responsible for transacting genetic information, so what other functions could introns (see Figure 23) possibly have? In 1986, the American chemist Frank Westheimer drew attention to the discovery that RNA (see Figure 24) can catalyze processes in E. coli, a discovery that led his Harvard colleague Walter Gilbert to suggest that when life on Earth began it would not have needed protein-based enzymes (Gilbert 1986). He suggested instead that one could contemplate “an RNA world” in which RNA molecules catalyze their own reproduction.

Figure 23 Nature is the ultimate packaging and sorting expert. The uncondensed fully extended DNA molecule in every eukaryotic cell is about a meter long. DNA is accommodated in a nucleus that is only about a millionth of that in diameter; and every part of it—a gene, say—must always be readily available for access. Nature achieves this miracle by winding the DNA around proteins called histones, bobbin-like structures that also contain flags signaling which specific gene’s DNA sequence surrounds which bobbin, and intricately coiling and super-coiling the resultant DNA–histone structures to fit into the confined nuclear space. The chromosome illustrated on the right of the image is packed with such DNA, a gene from which has been pulled out to illustrate its intron–exon structure. (Source: Wikipedia.)

Figure 24 Diagram illustrating the molecular structures of single-stranded RNA and double-stranded DNA, with their sugar–phosphate backbones and nitrogenous bases (adenine, guanine, cytosine, and thymine; in RNA, uracil replaces thymine). (Source: Wikipedia.)

In this world, the self-splicing intron can splice itself out of an RNA molecule, a reaction that should be reversible so that an intron could also splice itself back into an appropriate nucleotide sequence. Thus, in the RNA world, introns could both remove themselves from and insert themselves into the background of replicating RNA molecules, thereby creating transposons—jumping genes—that can move exons around. RNA therefore has a major evolutionary function in this world: recombination, the ability to produce new combinations of genes.


In Gilbert’s RNA world, therefore, the first stage of evolution proceeds by RNA molecules performing the catalytic activities necessary to assemble themselves from a nucleotide soup. At the next stage, RNA exons begin to synthesize protein enzymes, which, because they are more efficient enzymes than their RNA precursors, will eventually come to dominate. In Gilbert’s proposed final stage, DNA appears on the scene, “the ultimate holder of information copied from the genetic RNA molecules by reverse transcription.” At this stage, RNA is relegated to the intermediate role it has today. It is no longer at the center of the stage, having been displaced by DNA and by the chemically more versatile protein enzymes. The intron–exon structure of genes we see today is therefore a “relic,” as he put it, of DNA’s imprinting by the RNA molecules that in earlier times encoded proteins. However, Gilbert’s suggestions imply, in effect, that RNA’s evolution came to an end long ago, permanently relegating it to intermediary roles rather than allowing it to evolve new functions. But RNA is still an extant molecular species. It seems unlikely that its evolution has been suspended when no other molecular species seems to have suffered the same fate.

In Mattick’s view, introns play increasingly important roles as organisms become more complex, playing only small parts in prokaryotes and reaching their maxima in eukaryotes (see Figure 25). On this view, exonic RNA, through messenger RNA (mRNA), can of course code for proteins; but mRNA can also have non-coding functions. In addition, while the don’t-rock-the-boat conventional view assumes that intronic RNA, being junk that is no longer required, is swept away to be degraded in the cytoplasm and its ribonucleotides recycled, Mattick suggests that there may be other possibilities. Indeed, as a general rule in science, any conceivable process that on current understanding is not specifically forbidden (because it violates a well-established principle such as energy conservation, for instance) must take place; if it is not observed, that understanding may have to be modified. Thus, the supposedly genetically inert introns might combine with exonic RNA not required for translation into proteins and go on to perform other functions. Hundreds of “microRNAs” have already been identified. They are derived from introns and larger non-protein-coding transcripts, and they time such developmental processes as stem cell maintenance, cell proliferation, and apoptosis—programmed cell death. Others surely await discovery.

But, as Mattick points out, the idea that significant amounts of higher organisms’ genomes are genetically inert had been seeded in the minds of molecular biologists some years earlier by the so-called “C-value paradox,” the observation that some organisms have much more DNA per cell than seems justified by their relatively simple lifestyles and functions (Mattick 2009). Iconic examples included amoebae and onions, which have very much more DNA per cell than humans. Hence, it was thought that because increased developmental (and neurological) complexity should be underwritten by an increased number of genes, some and perhaps all organisms must contain significant amounts of DNA sequences that are nonfunctional genomic passengers. This idea led to the concept, and the coining of the term, “junk” DNA by the Japanese-American scientist Susumu Ohno in 1972.
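The conventional route sketched in Figure 25 can be caricatured in a few lines of code. The Python fragment below is only an illustrative toy under stated assumptions: the sequences are invented (the xyz/pqr-style intron echoes the prose analogy above and is not a real nucleotide sequence), and none of the real splicing machinery is modeled. It simply transcribes a DNA string into RNA, excises a marked intron, joins the exons into mRNA, and sets the intronic RNA aside, which is the fate the orthodox view assigns to it.

# Toy illustration only: hypothetical sequences, no real splicing machinery.
def transcribe(dna):
    """Copy a DNA sense-strand string into RNA (uracil replaces thymine)."""
    return dna.replace("T", "U")

def splice(primary_transcript, intron_spans):
    """Remove the given (start, end) intron spans; return (mRNA, intronic RNAs)."""
    exons, introns, cursor = [], [], 0
    for start, end in intron_spans:
        exons.append(primary_transcript[cursor:start])
        introns.append(primary_transcript[start:end])
        cursor = end
    exons.append(primary_transcript[cursor:])
    return "".join(exons), introns

gene = "ATGGCCXYZPQRGGCTAA"          # hypothetical gene; XYZPQR stands in for an intron
rna = transcribe(gene)               # primary transcript
mrna, intronic = splice(rna, [(6, 12)])
print(mrna)      # AUGGCCGGCUAA  -> exported for translation into protein
print(intronic)  # ['XYZPQR']    -> "junk" on the orthodox view; regulatory on Mattick's

On Mattick’s reading, of course, the interesting part is precisely the second output, not the first.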

Figure 25 Mattick’s proposed view of gene activity in eukaryotes, published in BioEssays (Mattick 2003): DNA is transcribed into primary RNA transcripts; splicing yields assembled exonic RNA, which is processed into mRNA for translation into proteins or serves other functions, while intronic RNA is either degraded and recycled or processed into microRNAs and other non-coding RNAs with further functions. (Reproduced by permission of Mattick.)

The concept received support from the discovery, at much the same time, that eukaryotes contain significant fractions (some 45% for the human genome) of “repetitive DNA,” specific DNA sequences repeated many times over, much of which comes from mobile genetic elements—Barbara McClintock’s transposons or jumping genes. According to Mattick, however, these ideas are not necessarily wrong, but that does not mean that they are right. As he put it:

The emerging view of the genome as being largely populated by genetic hobos and evolutionary debris made it easy to consign introns into the same basket, and became self-reinforcing.


However, as Mattick notes, there are many important observations that apparently do not fit into orthodox perspectives, but they have received little attention. To give only a few examples: data from the Human Genome Project (see Poster 10) have revealed that the most highly conserved genetic sequences, the so-called ultra-conserved elements, are almost all located in supposedly noncoding regions, an observation that would seem to indicate that these elements have undiscovered importance. For organisms in general, one might expect that the numbers of conventional protein-coding genes should scale with increasing developmental complexity (the C-value paradox), but they do not. The simple nematode worm C. elegans has only ∼1,000 cells, but it has almost the same number of protein-coding genes (∼20,000) as Homo sapiens. Yet humans have ∼10¹⁴ cells, precisely sculptured organs and muscles, and a brain with ∼10¹¹ neurons and ∼10¹⁴ neuronal connections. Where does the information underpinning this extraordinary developmental complexity come from? Since worms and humans apparently have much the same genetic richness, that information would not seem to reside in the genes per se; perhaps it has epigenetic sources in the non-protein-coding regions of the genome? See Figure 26 for a photograph of Mattick taken at about this time.
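To make the mismatch concrete, here is a trivial Python comparison using only the round figures quoted above; they are orders of magnitude, not measurements, and the point is simply that the cell counts differ enormously while the gene counts barely differ at all.

# Order-of-magnitude figures as quoted in the text; a toy comparison only.
worm  = {"cells": 1e3,  "protein_coding_genes": 2.0e4}
human = {"cells": 1e14, "protein_coding_genes": 2.0e4}   # plus ~1e11 neurons, ~1e14 connections

cell_ratio = human["cells"] / worm["cells"]                                # ~1e11-fold more cells
gene_ratio = human["protein_coding_genes"] / worm["protein_coding_genes"]  # ~1, essentially no difference
print(f"cells: ~{cell_ratio:.0e}x more; protein-coding genes: ~{gene_ratio:.0f}x")
# cells: ~1e+11x more; protein-coding genes: ~1x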

Poster 10: Genome Libraries

The Human Genome Project was a 13-year (1990–2003) international effort to find all the human genes and to make a complete sequence map of the ∼3 billion nucleotide base-pairs that comprise a human genome. The US National Institutes of Health (NIH) and the Department of Energy (DOE) funded most of the $3 billion Project (NIH and DOE determined some 60–70% of the human genome), but the UK’s Wellcome Trust became a major partner during the Project’s early years. Additional contributions came from Japan, France, Germany, China, and other international partners. Surprisingly, the human genome contains only some 25,000 genes—a very low figure. The completed sequence is a composite derived from many individuals (more than 100) rather than one person. It is therefore a representative or generic sequence that can be used for general biomedical studies. The Project involved more than a thousand researchers at 20 institutes around the globe. Francis Collins, the Director of the NIH’s National Human Genome Research Institute (NHGRI), said, in introducing a special edition of Nature that was dedicated to publishing the Project’s full and comprehensive results (Collins 2006):

The complete collection of papers from the International Human Genome Sequencing Consortium details the sequence and analysis of the heritage of humanity: the 3.08 billion base pairs of the DNA genome of Homo sapiens.


Of all the scientific endeavours yet attempted by humankind, historians will probably rank this among the most significant achievements.

On completion of the draft Project, it became clear that comparisons with other animal and plant species would also be important. Hence, in 2002, the International Sequencing Consortium was established: to provide a forum for genomic sequencing groups and their funding agencies to share information, coordinate research efforts and address common issues raised by genomic sequencing, such as data release and data quality.

In 2003, the NHGRI launched a new public research consortium named ENCODE—the Encyclopedia of DNA Elements. This was another heroic and global-scale initiative with the purpose of identifying all the functional elements of the human genome. Following a pilot phase to identify those elements in a defined 1% of the genome and to develop the appropriate technologies for doing so, the ENCODE Project was expanded in 2007 to include the whole human genome. Data on the genomes of the bacteria and the archaea are available from, for example, the US National Center for Biotechnology Information. These vast libraries of raw data (writing out the sequences for the human genome alone would require some 2000 books the size of this one, or some 750 megabytes of computer storage) are freely available to everyone. However, researchers requiring funding must still submit proposals for evaluating and interpreting these data to their funding agencies in the usual way; and that inevitably means, of course, that only those proposals deemed relevant to current priorities are likely to be selected.

On June 13, 2013, the Supreme Court of the United States unanimously ruled that human genes may not be patented. The central question for the justices in the case was whether isolated genes are “products of nature” that may not be patented or “human-made inventions” eligible for patent protection. However, the justices also ruled that cDNA, which contains only the exons that occur in DNA and omits the intervening introns, is eligible for patent protection.
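The storage figure quoted in the poster is easy to check with a back-of-envelope Python calculation. The 2-bits-per-base encoding and the characters-per-book figure below are my assumptions, chosen only to show that the round numbers quoted are of the right order.

# Back-of-envelope check; encoding and book size are assumptions, not Project figures.
base_pairs = 3.08e9                    # haploid human genome, as quoted by Collins
bits = base_pairs * 2                  # four letters (A, C, G, T) need 2 bits per base
megabytes = bits / 8 / 1e6
print(f"~{megabytes:.0f} MB at 2 bits per base")   # ~770 MB, of the order of the ~750 MB quoted

chars_per_book = 1.5e6                 # assumed: a densely printed book of roughly this size
print(f"~{base_pairs / chars_per_book:.0f} books at one printed letter per base")   # ~2053 books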

Mattick’s revolutionary proposal is that as evolution progressed, regulatory functions transacted by RNA, rather than by proteins, provided the additional sources of information that stimulated the development of increasingly complex organisms with expanded cognitive capacities. The human brain, for example, the most complex structure known, is known to be a major site of RNA expression. Thus, the large majority of the human genome, which apparently does not code for proteins, may encode regulatory functions transacted by RNA.


Figure 26 Photograph of John Mattick taken in 2001 soon after he had turned his full-time attention to the functions of introns. (Reproduced by permission of John Mattick.)

Mattick’s proposal therefore rescues, so to speak, RNA from the evolutionary stasis imposed on it by Gilbert’s RNA world hypothesis and suggests dynamic and evolving new roles for this molecule. In January 2012, Mattick moved to his present post as Executive Director of the Garvan Institute of Medical Research in Sydney, together with his team from the University of Queensland. Three months later, he won the internationally prestigious Chen Award for Distinguished Academic Achievements in Human Genetics & Genomic Research, awarded by the Human Genome Organisation. Three months after that, the ENCODE Project Consortium (2012) (see Poster 10) published a paper that said:

These data enabled us to assign biochemical functions for 80% of the genome, in particular outside of the well-studied protein-coding regions. Many discovered candidate regulatory elements are physically associated with one another and with expressed genes, providing new insights into the mechanisms of gene regulation. The newly identified elements also show a statistical correspondence to sequence variants linked to human disease, and can thereby guide interpretation of this variation. Overall, the project provides new insights into the organization and regulation of our genes and genome, and is an expansive resource of functional annotations for biomedical research.

The Chen Award and the remarkable results from ENCODE would seem to indicate that Mattick’s campaign was moving in the right direction.


However, following Mattick’s Chen award, Laurence Moran, professor of biochemistry at the University of Toronto and author of the widely respected blog Sandwalk (named after the short country path Darwin had constructed at his home and which he frequently patrolled as an aid to quiet contemplation), wrote a post that began with the exclamation, “Shame on the Human Genome Organization.” He went on to say that he was “pretty sure that there’s no more than a handful of biologists/molecular biologists who believe Mattick.” He concluded with the remark, “I think the majority of biochemists now think that only a fraction (less than 20%) of our genome is devoted to making functional regulatory RNAs.” Furthermore, the ENCODE results have been disputed. A recent paper published by Dan Graur (2013) and his colleagues at the University of Houston and the Johns Hopkins University Bloomberg School of Public Health was openly scathing, if not contemptuous, of the ENCODE conclusions. Indeed, I have rarely if ever read a scientific paper written in such a sneering tone. For example, the paper concludes with the sentence:

The ENCODE results were predicted by one of its authors to necessitate the rewriting of textbooks. We agree, many textbooks dealing with marketing, mass-media hype, and public relations may well have to be rewritten.

Science never accedes to democratic pressures, of course, and it would not matter if every scientist disagreed with Mattick, provided his ideas survive all future scrutiny. However, if Moran correctly assesses majority opinion, it will inevitably have an effect on the research that generally gets funded, because peer preview and consensus now play essential roles in funding agencies’ research selection procedures. The ENCODE studies identified many novel sources of gene expression and regulation, as well as the ways such information is organized, and would therefore seem to offer substantial support for Mattick’s proposals. However, as I have tried to explain in previous chapters, even when revolutionary ideas have survived all objective assessment, it can be a long time before they are finally accepted. The flak from Graur et al. (2013) might be only the first salvo aimed at would-be changers of the status quo. But now that Mattick is at the Garvan lab, he would seem to have access to all the funds he needs to advance his crusade. Perhaps he might also inspire other maverick searchers after truth to address some of the many problems that currently beset us, and which are serious obstacles to progress.

12 Conclusions: How We Can Foster Prosperity Indefinitely

The discoveries outlined in the previous chapters indicate the character of the problem. Its global implications are enormous. Indeed, my reason for selecting the figure “500” is that it is the approximate number of scientific Nobel Prize winners in a century, each of whom, in the words of the Nobel Foundation, was selected as a person “who in the previous year, shall have conferred the greatest benefit on mankind.” It would seem imperative, therefore, that those responsible for scientific governance should ensure that this supply of unforeseeable and priceless intellectual capital is never compromised. Unfortunately, our governors have abandoned Bush and Dale’s vision, despite the extensive evidence supporting its accuracy, and now increasingly focus on foreseeable benefit, perhaps because in the short term it is an easier policy to sell to the politicians who control the public-money supply. Indeed, the very procedures that require researchers to win the advance approval of their peers are now widely acclaimed as the “gold standard” for proposal evaluation. Those who criticize these procedures risk their colleagues’ opprobrium and, more importantly, loss of funding. But Bush, Dale, and others fought to stop us from falling into the tender trap of managing academic research by objectives, however attractive consensus deems them to be. We should thank our lucky stars that they did. Who will fight their corners today?


Those who defend current policies may say that the arguments against them do not constitute proof that they are inhibiting the creation of a twenty-first-century Planck Club. That is correct, of course. Such a proof is impossible, as no one can know today what discoveries might be made, what intractable problems from the many that face us might be solved, or what sorts of environments might stimulate those rare researchers to begin inquiries at the Great Frontiers of the Unknown and to challenge what we think is known. Before approximately 1970 we did not need this vital information, of course, because policies then gave maximum rein to individual creativity. Society cannot do more. However, those who change the status quo must be made to prove that their changes are not having adverse effects, or bear the consequences.

Much has been written about the current financial crisis and what might be done to ameliorate it. But everyone ignores research’s contributions, and particularly the quality of that research.* Research investments nowadays are very much higher than ever, but what about the quality of research? Quality, in this context, is not related to the potential value of the goals to be achieved or to the ability of the scientists involved. For example, an investor who commissions a team of reputable scientists to find cures for, say, cancer or autoimmune diseases within, say, 10–20 years would be making a poor-quality investment. Advances could well be made, but these diseases are highly complex. Many billions of dollars have already been committed to such frontal attacks, with only limited success. Alternatively, unconstrained inquiries into, say, the nature of disease, or the regulation of growth in living organisms, or comparative studies of the immune systems of invertebrates, insects, and plants might yield surprising outcomes, and investors commissioning them would be making high-quality investments.

Attitudes toward risk can also affect the quality of an investment. Research is a form of insurance against the consequences of an unpredictable future, but all research involves risk. Policy makers in funding organizations are therefore faced with the difficult questions of how that risk should be managed and, implicitly, of who should manage it. One option is to transfer this responsibility to the funded scientists, who would be encouraged to manage the risks so as to promote positive outcomes. Researchers should be prepared to respond to whatever difficulties or opportunities might emerge, but even with extensive knowledge, the uncertainties cannot be removed. Indeed, researchers should be given every possible incentive to take calculated risks—but they must do the calculations.

Before 1970, governments had few policies for fostering increased levels of economic output; and, paradoxically, such inaction was the best policy that could have been chosen, since no one understands the links between investment and growth. Research merely focused on the problems researchers were interested in.

* These issues are discussed more fully in Braben (2008).


The changes introduced post-1970 to academic research selection procedures—peer preview (merit review), impact—have concentrated on bringing forward a predictable future that most people would like to see. Peer preview does its best to concentrate on the “academic” merit of proposals, while considerations of impact concentrate on their “industrial” aspects: their presumed potential for beneficial outcomes. Despite huge increases in research spend since 1970, these procedures have had only limited success in making those predictions happen, and they have failed to stimulate the investments in the infrastructures for growth, which are necessarily very much greater than those in research itself. It would seem, therefore, that the unexpected is growth’s essential catalyst; and its major source is new science and new understanding. This is science’s mysterious mojo (see Chapter 2). All we have to do is to restore the pursuit of science to its former respectability and to allow defiant youth or irreverent researchers the freedom to choose the problems that interest them. Revolutions are not required. We can continue to invest in a predictable future if a small proportion of those investments are also made in the people who can change these projections.

Alec Jeffreys of the University of Leicester proposed to the Medical Research Council in 1980 to identify highly variable segments of human DNA for their intrinsic scientific interest and for their potential use in human gene mapping and medical genetics. Had Jeffreys been required to outline the potential impact of this work, he might have described its long-term potential in medical diagnostics. However, in 1980, Jeffreys was 30 years old. Then, as now, medical diagnostics was a highly competitive field. As a young and inexperienced postdoc, he would have had great difficulty convincing a panel that the potential impact of his work would be significant. Luckily, he did not have to do so. In 1984 Jeffreys produced (accidentally) what proved to be the world’s first deoxyribonucleic acid (DNA) fingerprint, immediately opening up the huge new area of DNA-based identification. Jeffreys, who has since been knighted, says today that he “had no idea, at any time, where his work was leading,” nor of the enormous future impact it would have on forensic and legal medicine, conservation biology, and medical diagnostics, as well as on the police, courts, lawyers, politicians, legislators, and the public. Jeffreys believes that had he not carried out this work, DNA identification systems would probably have appeared, perhaps 2 or 3 years later, but almost certainly from the US. Little if any of the substantial economic benefits and kudos the UK acquired as the pioneer in the field would have happened. Jeffreys’ discovery directly impacted the lives of tens of millions of people worldwide and created new markets valued at billions of dollars per annum.

Andre Geim and Konstantin Novoselov were Russian émigrés who worked in Nijmegen and Manchester on mesoscopic physics.


In Nijmegen, Geim discovered that 5-cm water droplets and frogs can be made to levitate by exploiting their diamagnetism, a discovery that won him, and Michael Berry from the University of Bristol, the 2000 Ig Nobel Prize for Physics* and that, incidentally, gave them the chance to demonstrate their sense of humor when they were asked whether they would accept the prize. Moving to Manchester in 2001, Geim and Novoselov discovered graphene, a new two-dimensional state of carbon. They won the 2010 Nobel Prize for Physics for this discovery. Geim said in his 2010 Nobel talk (Geim 2013):

Whenever I describe this experience to my colleagues abroad, they find it difficult to believe that it is possible to establish a fully functioning laboratory and a microfabrication facility without an astronomically large start-up grant. If it were not for my own experience, I would not believe it either. Things progressed unbelievably quickly. The University was supportive, but my greatest thanks are reserved specifically for the responsive mode of the UK Engineering and Physical Sciences Research Council (EPSRC). The funding system is democratic and non-xenophobic. Your position in an academic hierarchy or an old-boys network counts for little. Also “visionary ideas” and grand promises to “address social and economic needs” play little role when it comes to the peer review. In truth, the responsive mode distributes its money on the basis of a recent track record, whatever that means in differing subjects, and the funding normally goes to the researchers who work both efficiently and hard. Of course, no system is perfect, and one can always hope for a better one. However, paraphrasing Winston Churchill, the UK has the worst research funding system, except for all the others I am aware of.

With friends like these, which research funding system needs enemies? Geim and Novoselov are now among the most lavishly funded scientists in the UK, which is as it should be; they have earned the right to be financially secure. The Government has supported a new National Graphene Institute in Manchester, which will be completed in 2015 at a cost of £61m, of which £38m comes from the research councils. But the money for most of this largesse comes from the basic research fund, the very fund that supported them initially, before they became famous. This means, of course, that there is now even less money available to fund new ideas. This is not a matter for them but for the Research Councils. In an interview with The Independent, published on March 20, 2013, Novoselov said:

Scientific breakthroughs are becoming more difficult in Britain because of the pressure on scientists to demonstrate that their research has practical benefits before it is funded. . . .

* The Ig Nobel Prizes are organized by the journal Annals of Improbable Research. They honor achievements that “make people laugh, and then make them think.” The prizes are intended to celebrate the unusual, honor the imaginative—and spur people’s interests in science, medicine, and technology. Every year, in a gala ceremony in Harvard’s Sanders Theatre, 1200 “splendidly eccentric spectators” watch the winners accept their prizes from genuine Nobel Laureates.


You start to see more and more forms that ask you about the benefit to society from your research, and so on. It’s very hard to determine the benefit to society because science deals with something that is unknown. Another problem is that scientists begin to feel ashamed of negative results, which wasn’t the case a few years ago. Negative results are often as important as the positive results. The current system doesn’t tolerate failure. The situation is getting worse. The pot of money allocated to science is not increasing fast enough, or is even under the threat of being shrunk. But you are also seeing more and more strings attached to this money, which they shouldn’t be.

Such leverage—the difference between the sum the researchers see and the Research Council contribution—is now common. The Research Councils apparently see themselves as providers of long-term research funding for industrial users who cannot, or will not, provide all the funding for it themselves. Thus, the Councils will support, in the national interest, the research that might meet those users’ demands. The EPSRC lists more than a hundred priority areas (in 2013). But who will support the research for which there is yet no demand, such as the pioneering work outlined in the earlier chapters?

Had Bush and Dale failed to win their battles, and had strategies similar to the EPSRC’s (which are not atypical today) been adopted in 1950 instead, would they have given rise to the great postwar discoveries we now find indispensable, bearing in mind that many of them were iconoclastic and that even the very words used to describe them were then unknown? Such terms as “laser,” “MRI scanning,” and “personal computer,” for example, were unknown in their present usage before the 1970s. Who would sponsor Kendrew and Perutz in their heroic 25-year struggle to understand protein molecular structure, much of it spent, on their own admission, without any progress, but the results of which today underpin drug design in the pharmaceutical industry and much else? What obligations could Planck-Club scientists in general have accepted in all honesty when they initially had no idea where their work might lead or the significance it might have? Much more importantly, however, how many Planck-Club proposals would have survived the current mandatory filter of peers’ expectations, especially if the peers were specifically charged to look for national benefit? Let us call this question the Planck Test. Answers will be qualitative and subjective, of course, and many might dismiss them as merely matters of opinion on the proposals that will or will not succeed. Almost universally today, peers are routinely required to make predictions of future performance, a notoriously difficult task with a highly dubious record even for industrialists; nevertheless, these qualitative and subjective judgments have been given a crucial role in selecting the academic research to be funded and ditching the rest. Indeed, as mentioned earlier, they have been accorded “gold-standard” status. Funding agencies might therefore consider using the Planck Test for assessing their policies. It might also help them keep Bush and Dale’s visionary thinking at the front of their minds.


On the debit side, there seems to be a growing opinion that new sciences lead only to new ways to exploit or control people or to wage war. There is ample evidence to support that view. Remotely controlled pilotless aircraft with effectively unlimited endurance can secretly study our movements or destroy small targets thousands of miles away—death and destruction at the touch of a button. The vast increases in internet and telephonic communications can be used to track almost every facet of what we do, what we buy, where we go, what we say, and who we meet. Hidden cameras in almost every city in the world observe and record our public movements 24 hours per day. All these developments are new. Few of them would have been possible even 30 years ago. The threat from nuclear weapons is, of course, much older and is still very much with us as more nations press to acquire nuclear capability. New sciences are not directly responsible for any of this, of course. New sciences open up new ways of thinking and new philosophies, and they make new types of things possible. Society—through its institutions, governments, commerce, industry, and every one of us—decides how the new knowledge will be used. Science merely provides options, therefore. But what will happen when the supply of new sciences dries up?

Once upon a time, the word “research” usually implied the possibility of a surprising and unpredictable outcome. Over the past few decades, funding agencies have decided that relying on elements of surprise can be inefficient, and they have increasingly turned to scientists for their ability to solve problems according to a timetable. Thus, agencies tend to focus on research that might have the greatest potential “impact,” that is, which aims to deliver outcomes consensus deems attractive—the universal methodology nowadays for keeping researchers honest, keeping ivory-tower-ism at bay, and ensuring that taxpayers’ money is not wasted. As Samuel Johnson might have said, “the success of such a policy requires the triumph of hope over experience.” Work within such a framework should not really be called research; it is development—the deployment of ideas within existing ways of thinking.

One might say that this is a social problem, but our difficulty in advocating Venture Research has always been that the precise nature of what we are being deprived of cannot be described in advance. We cannot know what a modern “Einstein” would do, only that he—or she—would change the way we think about an important problem in some substantial way. One must have faith in what an unconstrained future might hold. We must therefore find imaginative ways of liberating creative academics from the paralyzing grip of bureaucracy and short-termism. The economic situation today is hardly worse than it was in 1945–1950. International funding bodies, such as the European Research Council, might help. Many of them are new and designed to cover the widest ranges of intellectual ground; but sadly, a need for peers’ opinions on what is important and what is not is usually built into their foundations.


National agencies—public and private—should also be a good bet, as there is only one set of scientific governors to convince of the importance of these problems; but few if any seem confident enough to trust scientists with the total freedom they once routinely enjoyed. Furthermore, private funders are increasingly maximizing the efficiency of their giving and targeting the problems that seem to beset us. They rarely select the people who might want to radically change what is thought and what is done.

A component of a solution, therefore, might be for our governors to recall that rigorous academic appointment procedures can, in principle, offer substantial protection for the public purse against profligacy and waste, because appointees have necessarily had to prove to their peers that they are worthy of trust. However, I add the cautionary words “in principle” because so many appointments nowadays are made not for a person’s panoramic potential but for some specific expertise and, in particular, for their proven or anticipated ability to attract funding—the more the better. In former times, academics were appointed for the quality of their intellects and were free to follow any line of research they thought fit. While these noble aspirations were not always achieved, universities appointed a sufficient number of inspired and uninhibited explorers to ensure the Planck-Club harvest. That should be the task of at least some academic appointment boards, and at least at some universities, in the future.

However, as I have said previously, major changes are not required. The methods used to assess research proposals all over the world work well enough for most—probably more than 95%—of the time. In any other field this could be regarded as triumphant success. But as I have said repeatedly, science is very different from other subjects. It is not science’s task to solve human problems, but to provide new options for an unpredictable future, both social and economic. It is only in the past few decades that its raison d’être has changed, and it is now seen universally as the source of economic growth for a future that has suddenly become foreseeable. Thus, politicians can proudly proclaim that every dollar, pound, or euro (and increasingly many other currencies, too) spent on research in their neck of the woods must serve the national interests in terms of jobs created or opportunities won, and that the universities must take the lead. In these circumstances, it is a mystery why every nation does not invest the same or comparable amounts in research, instead of the wide ranges of support given in Table 1 in the Introduction.

The problems come from the management of the research enterprise, of course, and particularly of academic research. For much of the twentieth century, academics were free to do as they pleased, but only for as long as what they did was seen as unimportant. Their support had always been meager; they were also educating the next generation, so managements had no reason to intervene. When the penny finally dropped around 1970 and an awareness of what the universities were capable of doing finally penetrated the minds of putative managers, they did something about it. They then took the fatal steps towards the current system of management by objectives, which, apart from details of the overall amounts invested, is now more or less the same everywhere.


It appears that the future can be managed after all. However, if we take the average of the 10-year growth figures in gross domestic product (GDP) for 2002–2011 for France, Germany, the UK, the US, and China, we see that the “western” countries have all grown at the same rate to within ±0.3% per annum (see Table 5). China, included as a representative of a developing country playing “catch-up,” to use Maddison’s delightful phrase, has grown substantially more. Should these facts be surprising? If we take the western countries, have these marginal improvements been worth the pain inflicted on academics of having their research selected, without exception, against such criteria as what their closest competitors think about their proposals, prospects for innovation, and the like? The changes have been produced partly by the political need to considerably expand the numbers entitled to a university education. This has involved creating new universities and expanding the academic sector. But surely all this could have been achieved without compromising the many unpredictable advances academics made in such fields as nuclear physics, the maser–laser, and the avalanches of discoveries and developments seen in the biological sciences.

My own recipe for much greater freedom is founded on Venture Research, an initiative that seeks to open up new areas of research addressing problems that currently pose serious obstacles to progress. It does not use deadlines, peer preview, milestones, or any of the now-ubiquitous criteria used by the conventional funding agencies. Applications are easy to make and are considered by face-to-face discussion between scientists. Our selection criteria strive to establish mutual trust. Why should applicants reveal what they really want to do if we were going to share that personal information with their closest competitors, as conventional peer preview requires? They would only do that if they trusted us. We also aim to give researchers maximum freedom; we would only do that if we trusted them. We—non-competing scientists—made the selections; we created the critical and trusting environments within which scientists could argue for as long as it took that they should be funded. The research we selected was initially well outside the current mainstreams, and scientists worked on the most basic and fundamental problems chosen by themselves.

Table 5. Annual Percentage Growth in GDP, 2002–2011

            2002   2003   2004   2005   2006   2007   2008   2009   2010   2011   Average
France       0.9    0.9    2.5    1.8    2.5    2.3   −0.1   −3.1    1.7    1.7     1.11
Germany      0.0   −0.4    1.2    0.7    3.7    3.3    1.1   −5.1    4.2    3.0     1.17
UK           2.4    3.8    2.9    2.8    2.6    3.6   −1.0   −3.5    1.8    0.8     1.62
US           1.8    2.6    3.5    3.1    3.7    1.9   −0.4   −3.5    3.0    1.7     1.74
China        9.1   10.0   10.1   11.3   12.7   14.2    9.6    9.2   10.4    9.3     10.6

Source: World Bank 2012.
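As a quick check on the table, the short Python fragment below recomputes the final column as the simple ten-year arithmetic mean of the yearly figures transcribed above; it reproduces the Average values to the precision shown.

# Yearly GDP growth (%) for 2002-2011, transcribed from Table 5 (World Bank 2012).
growth = {
    "France":  [0.9, 0.9, 2.5, 1.8, 2.5, 2.3, -0.1, -3.1, 1.7, 1.7],
    "Germany": [0.0, -0.4, 1.2, 0.7, 3.7, 3.3, 1.1, -5.1, 4.2, 3.0],
    "UK":      [2.4, 3.8, 2.9, 2.8, 2.6, 3.6, -1.0, -3.5, 1.8, 0.8],
    "US":      [1.8, 2.6, 3.5, 3.1, 3.7, 1.9, -0.4, -3.5, 3.0, 1.7],
    "China":   [9.1, 10.0, 10.1, 11.3, 12.7, 14.2, 9.6, 9.2, 10.4, 9.3],
}
for country, rates in growth.items():
    print(f"{country:8s} {sum(rates) / len(rates):5.2f}")
# France 1.11, Germany 1.17, UK 1.62, US 1.74, China 10.59 (rounded to 10.6 in the table)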


We merely monitored their progress and kept in touch. We also organized annual conferences—feats in themselves, as we covered so much disciplinary ground. We had to learn how to do these things, and an important factor was the abolition of the concept of the “stupid question”—there are only stupid answers! We also stimulated exchanges of views and backed the new collaborations that emerged.

The first tranche of research proposals under the Venture Research aegis was made during the period from 1980 to 1990 and was generously financed by BP. This was a crucial period for the initiative. BP allowed complete freedom for our small team to develop the successful modus operandi and the confidence to apply it. BP’s support during this period was exemplary. The total cost of the program was some £15 million in money of the day, including all BP and university overheads over the 10 years. The approved program was discussed in my book Scientific Freedom: The Elixir of Civilization. The book focused on each of the 26 proposals active when our sponsor—BP—suddenly began to adopt an uncompromising policy focusing on its core businesses and, without any warning, withdrew its support. The book gave my personal assessment—endorsed by each of the researchers—of what had been achieved, and its significance. Each of the proposals (except one) had been turned down by peer preview; we were always the last hope that a desperate researcher would have. One would expect in these circumstances that success would be strictly limited, as all conventional determinants of “quality” had been ignored or set aside, but in fact we were very successful. Written in 2008, some 18 years after BP’s untimely and ill-advised decision to close us down, the book recorded that 13 of the proposals had achieved breakthroughs; that is, the researchers had actually succeeded in radically changing the ways we think about important subjects (see Table 6).

In February 2011, Times Higher Education published a list of the world’s 100 top chemists based on citation records for the past decade. Of those 100, 70 are from the US. Such lists show, unsurprisingly, how Americans dominate charts that reflect their interests. If your country is as big as America and you invest substantially, whatever you are doing will attract followers. The top UK chemist (at number 46 in the list) is Ken Seddon from the Queen’s University of Belfast, who was a Venture Researcher. A young Ken Seddon had first submitted the proposal on ionic liquids we approved in 1988 to the precursor of the EPSRC. Their peer previewers rejected it with a “gamma” rating, the Council’s lowest, and did not allow resubmissions. Seddon brought it to us, and his subsequent career was founded on that proposal. The second UK chemist is John Holbrey, a colleague of Seddon. Ironically, Seddon’s department also scored top in 2011 in terms of its research “impact”! In a public vote organized by the UK Science Museum, ionic liquids were voted the top invention to shape the twenty-first century, taking 33% of the more than 50,000 votes cast. Second place, with 22% of the votes, went to “Raspberry Pi,” an initiative designed to considerably increase computer use among the young, in which my son David plays a leading role. Graphene was fourth with 11% of the votes cast.


Table 6. Transformative discoveries by Venture Researchers listed alphabetically (taken from Scientific Freedom: The Elixir of Civilization, Wiley, 2008)

Mike Bennett and Pat Heslop-Harrison (Plant Breeding Institute, Cambridge): Discovered a new pathway for evolution and genetic control
Terry Clark (University of Sussex): Pioneered the study of macroscopic quantum objects
Steve Davies (University of Oxford): Developed small artificial enzymes for efficient chiral selection
Nigel Franks, Jean Louis Deneubourg, Simon Goss, and Chris Tofts (Universities of Bristol and Edinburgh): Quantified the rules describing distributed intelligence in animals
Herbert Huppert and Steve Sparks (University of Cambridge): Pioneered the new field of geological fluid mechanics
Jeff Kimble (Caltech): Pioneered squeezed states of light
Graham Parkhouse (University of Surrey): Derived a novel theory of engineering design relating performance to shapes and materials
Alan Paton, Eunice Allen, and Anne Glover (University of Aberdeen): Discovered a new symbiosis between plants and bacteria
Martyn Poliakoff (University of Nottingham): Transformed Green Chemistry
Ken Seddon (Queen’s University of Belfast): Transformed Green Chemistry
Colin Self (University of Newcastle): Demonstrated that antibodies in vivo can be activated by light
Gene Stanley and José Teixeira (Boston University and Laboratoire Leon Brillouin, CEA-CNRS, France): Discovered a new liquid–liquid phase transition in water that accounts for many of water’s anomalous properties
Harry Swinney, Werner Horsthemke, Patrick DeKepper, Jean-Claude Roux, and Jacques Boissonade (Universities of Austin and Bordeaux): Developed the first laboratory chemical reactors to yield sustained spatial patterns—an essential precursor for the study of multidimensional chemistry

Following BP’s precipitous closure of the initiative in 1990, finding a replacement sponsor proved exceptionally difficult. A new company, Venture Research International Ltd (VRi), was founded in 1991. Despite a board that included the UK government’s Chief Scientific Adviser, Sir John Fairclough; Iain Steel, a retiring member of BP; and Nigel Keen, an executive with close relations with BP; and despite an advisory committee that included a future president of the Royal Society, then Sir Martin Rees (later Lord Rees), we were unsuccessful in raising funds for a new initiative.


Investors, then as now, were interested in solutions to problems that would make them money as quickly as possible, and the short term was the limit of their temporal horizons. It did not matter that other investors were considering these same problems and contemplating very similar solutions. It did not matter that we were offering the prospect of solutions to problems that were as yet unimagined by investors, as we might have been in, say, 1900, when researchers were free to do as they wished. One of VRi’s unspoken selection criteria concerned the likelihood that the putative Venture Researcher might win a Nobel Prize if the proposed research were successful. VRi (and also the earlier Venture Research initiative) would go forward only if that seemed a reasonable assumption. However, prizes are a lottery. No one can know who will win or who will be just outside the winners. We were offering investors an opportunity to take a slice from the distant future, but either they were not interested or they did not believe that we could deliver.

That offer was made to an American entrepreneur, Barry Silverstein, in 2005. He had approached me shortly after Pioneering Research had been published to propose that we collaborate in a new initiative. He was supported by the Ansaris (Anousheh, Amir, and Hamid Ansari), an American-Iranian family who had previously backed the successful X-Prize, a $10 million prize awarded to Burt Rutan and the Mojave Aerospace Ventures team when their three-person reusable spacecraft, SpaceShipOne, flew twice into space (above 100 kilometers from Earth) within a 2-week period. We met for dinner, all six of us (as Silverstein was accompanied by his wife, Trudy), in London aboard The World, a gigantic ship moored at Greenwich en route for its next stop in the Caribbean, and in which Silverstein had an apartment. Initially, we discussed a collaboration that would look for early gains, following which we could go for the full Venture Research approach. He invited us to his apartment after dinner, and he showed me his bookcase with some 30 copies of my book—his approach was certainly serious. We met again three days later, but this time there were just the two of us. His ideas had changed. He wanted me to advise him on the X-Prize route, to help pick some “juicy” targets to back that would yield substantial profits. But everyone is looking for these short-term winners. He could ask almost anyone about the most likely candidates, and their input would be worth as much as mine. On the other hand, the Venture Research approach was unique; there would be no competitors. We continued our discussions by correspondence for some months, but he was unwilling to change his mind.

We wished to use our extensive understanding of the academic research enterprise to create a world-class initiative at relatively modest cost. Instead of concentrating on fields, it would use a proven and successful methodology to select the people whose work is likely to make a substantial difference. In the Arabian Nights, a tale that has endured for centuries, Scheherazade would not have survived had she supplied only what was expected, as all her many predecessors had done and had consequently paid the ultimate price.


Exceptionally, by a process of procrastination and delay, she was able to appeal to King Shahriyar’s imagination and sense of wonder. With sensitive selection and management, science, too, can supply the unexpected.

In 2008, I accepted David Price’s argument that we should concentrate our efforts to raise the funds for a new initiative on one university—our own. David Price is Vice-Provost (Research) at University College London, and had been the academic who, as head of UCL’s Earth Sciences Department, had in 1995 appointed me to my present position, that of visiting professor. I accordingly made a suggestion to the Provost, Sir Malcolm Grant, for a Venture Research initiative that would be restricted to members of the institution. I also made the observation to him that a university’s core business is the provision of intellectual freedom for its academics; nothing else matters. The Provost asked about operating costs—what would be the likely cost for a full 3-year term? I replied that I had no idea; one cannot know what will come. However, if someone succeeded in satisfying the exacting conditions set, it would be a maximum of about £300,000 over 3 years. The Provost accepted my proposal, and an announcement was made on December 11, 2008. After some 18 years of struggle and disappointment, we finally had a Venture Research initiative. It had been a long wait. I was also “promoted” to Honorary Professor in Earth Sciences.

We had some 40 proposals in the first few months. Some 10 were from other universities, and I had no choice but to decline them. (No other vice chancellor accepted my offer to help them to set up a Venture Research initiative.) But there was one proposal, from Nick Lane, who was then an Honorary Reader at UCL, which caught my eye immediately. He said:

The Human Genome Project is a misnomer. There are two human genomes: nuclear and mitochondrial. I propose that functional mismatches between these two genomes are more important than variations in either, and that this has huge but currently overlooked health consequences. . . . Mitochondria are far from “just another organelle” with a few genes but are the key to eukaryotic cell physiology. Mitochondria passed almost all their genes to the nucleus, yet all mitochondria in all species retain a small core genome of similar genes. I believe these genes have been retained to control respiration, a benefit that outweighs the grave consequences of having two genomes in every cell. . . . But mitochondrial genes mutate hundreds, even thousands, of times faster than nuclear genes, setting up a serious selective strain that must be resolved every generation. How this is achieved is unknown. . . . Above a threshold, high free-radical leak is known to trigger apoptosis directly, providing a physical mechanism for selection to act. Below the threshold, I postulate that high free-radical leak optimises respiration and physical performance by redox modulation of gene expression. High free-radical leak therefore improves performance when genomes are poorly matched. . . . I propose to establish that (i) mismatched genomes do indeed have serious health consequences and (ii) these consequences are mediated by free-radical leak that is beneficial in the short term.


What a relief to have a proposal that did not mention specific objectives! We accepted Lane’s proposal in 2009 at a cost of £50,000 a year, giving him 3 years of Venture Research support for a largely theoretical study. He chose to work on the evolutionary importance of biological energy conservation, specifically the process of “chemiosmosis,” by which cells generate energy in the form of adenosine triphosphate by way of a gradient of protons across a membrane (see Chapter 9). Although chemiosmosis has been called “the most counter-intuitive idea in biology since Darwin,” its evolutionary significance had received little attention. He proposed to focus on three major transitions in evolution: the origin of life itself; the origin of the complex (eukaryotic) cell; and the evolution of the major traits shared by all eukaryotic cells—in particular sex, sexes, speciation, and senescence—and hypothesized that chemiosmosis played a critical role in each transition. Many current explanations were superficial or plain wrong; but the fact that it is still possible to argue, for example, that eukaryotes arose as early as 3.5 billion years ago or as late as 0.8 billion years ago gives an indication of just how difficult it is to know anything with much confidence about the deep evolutionary past. See Figure 27 for a photograph of Nick Lane taken at the author’s home.

During the 3 years, he published some 17 peer-reviewed papers, including theoretical papers in Nature, Science, and Cell. Four others were in preparation. The highlight was a prestigious “Hypothesis” paper published in Nature (Lane and Martin 2010) with Bill Martin from Düsseldorf University, in which they showed that prokaryotic genome size is constrained by bioenergetics.

Figure 27 Photograph of Nick Lane (left) and Don Braben (right) at Braben’s home in 2012. (This photograph was taken by Ana Hidalgo and used with permission.)


This vast leap in genomic capacity was strictly dependent on mitochondrial power and was the prerequisite to eukaryotic complexity. It was also the key innovation en route to multicellular life. Lane was appointed senior lecturer in UCL's Department of Evolution, Genetics, and the Environment in 2012 and was promoted to Reader later that year. At the end of his Venture Research support he applied for and won a conventional grant from the Leverhulme Trust. He was now successfully launched on his academic career. The success of Lane's work led the Provost to renew the scheme, and new applications are now being considered.

The situation in the US for the support of unconstrained research is no better than in the UK. In 2006, the National Science Board, the governing body of the National Science Foundation (NSF), set up a task force, on which I served, to inquire into the possibility of supporting a Transformative Research initiative. The task force concluded that a small proportion of NSF funds should be allocated to the initiative, and it also suggested that the NSF relax its funding criteria to accommodate it. The Transformative Research scheme has now been running for some 4 years, but the NSF declined our suggestion to have a single initiative across the Foundation, preferring to use its existing structures to consider proposals; furthermore, it insisted on using peer preview (merit review) to assess them. Nothing substantial was changed, therefore.

In May 2013, in response to Congress's comments on the NSF's merit review process, the following letter was sent from the Coalition for Science Funding, a body comprising some 110 organizations, no less, to the members of the House Committee on Science, Space, and Technology and the Senate Committee on Commerce, Science and Transportation:

The undersigned organizations are concerned about recent Congressional actions that call into question the National Science Foundation's merit review process for awarding research grants. NSF's merit review process relies upon the expertise of leading scientists and engineers, and it has a proven track record in supporting outstanding, fundamental research across all disciplines of science and engineering. Indeed, NSF's expert merit review process is a model for identifying research projects that are worthy of taxpayer-funded support and have the best opportunity to advance science and innovation. If the criterion for awarding grants shifts away from scientific merit as the primary goal, the quality of research proposals will suffer, as will the science and engineering that is ultimately funded. This would have negative impacts on our nation's entire research and innovation enterprise. . . . Funding basic research in all NSF-supported disciplines should be a national priority. Support of this goal should not force significant and potentially detrimental tradeoffs between one field of science and another. In creating NSF, Congress removed its political influence from the evaluation and selection process for awarding research grants, establishing a peer-review process to determine the best candidates for research funding. While past Congresses and administrations have at times identified areas of science for funding emphasis at NSF, such as in nanotechnology, robotics, information technology, and cybersecurity, they have prudently and appropriately allowed these research areas to be informed by priorities set by the National Science Board and other national scientific advisory committees and to be guided by the science community through a strong system of merit review. . . . It is imperative that NSF's system of support for basic research be based upon excellence, competitive scientific merit, and peer-review. While Congress does play an important role in oversight of federally funded research, it should avoid legislative attempts that could undermine a decades-long system of success and ultimately impede discovery and innovation.

The letter was prompted by events in March 2013 when, at the urging of US Republican Senator Tom Coburn, Congress halted funding for political science—"except for" research that the agency's director certifies as "promoting national security or the economic interests of the United States." This extra test might not stop with political science, and could probably be extended to other disciplines. Indeed, a draft bill from Republican Representative Lamar Smith—the High Quality Research Act—is currently before Congress. It states explicitly that the NSF Director must certify "that taxpayer-funded research projects are of high quality and benefit the American people." The funding criterion "promoting national security or the economic interests of the United States" should not be applied to any field of research. Criteria of this kind have now been standard practice in the UK for more than a decade, and the practice should not be allowed to spread.

Money is certainly short in the UK. Investment in R&D has been around 1.8% of GDP for years. The equivalent figure is ∼2.8% in the US and ∼2.9% in Germany. In Sweden it is 3.6%, while in Israel it is a staggering 4.2%. Moreover, the US GDP is ∼6 times the UK's, while Germany's GDP is almost 50% greater than the UK's. Until approximately 1970, research funding per researcher was a fraction of what it is today; yet between 1945 and 1989, UK-based scientists won some 19% of all scientific Nobel Prizes. (The gestation period for a Nobel Prize is typically 20 years or more.) The UK's tally of winners has since dropped by about a factor of two (see Table 7).

University appointment boards today are weighed down by pressures of pragmatism and look increasingly for the people most likely to improve a department's position in the perpetual fight for funds. But funding agencies' overriding concerns with the predictable can lead to mediocrity and wasteful competition.

Table 7. The percentages of all scientific Nobel Prizes awarded to researchers based in the UK, Germany, and the US since the inception of the Nobel Prize

Years        UK     Germany   US
1901–1944    14.4   29.1      12.9
1945–1989    19.0    7.3      48.6
1990–2010     9.9    6.6      62.0


Surgeons, for example, do not have to prove every time they propose to operate that they are the best persons available for the job, but academic researchers must do so today. As a start, therefore, we could arrange for some appointments to carry a license to practice similar to the one surgeons and others enjoy, together with modest funds guaranteed for extended periods. Such a change would begin to eliminate swathes of bureaucracy and its attendant costs and frustrations. The motivational boost would be tremendous.

For instance, say a country has 100,000 academics, but there is funding for only 25,000. How should the selection be made—that is, which research should get funded? At present, the universal answer to this question would be to invite proposals from all 100,000 and use peer preview to select what are considered to be the best 25,000. Worldwide, every country would treat its own "100,000 unit" of academic research in the same way. Priorities would be roughly the same everywhere, and the resultant global duplication would be considerable. Another potential solution, rarely if ever used today, would be to invite all 100,000 to apply for "qualified scientist" status, assessed against the same standard set for normal academic appointments, and to divide the available funding equally among them. However, this option is not viable: there have been too many changes to university staffing structures in recent decades.

Universities today are not what they were, and they place huge obstacles in the path of any reform. Progressively over the past few decades, vast numbers of additional professional staff have been appointed without responsibilities for academic output. Administrative roles include the provision of a host of services, including payroll, library, and pensions, but in recent times new roles have been added, including that of overseeing "compliance": the need to prove that every step is taken to ensure that applications for funds satisfy the funding agencies' requirements, and that any funds won are being sensibly and properly used. In attempts to improve success rates, universities (at least in the UK) are given the power to block what they consider to be unsatisfactory applications, thereby removing them from the statistics. They simply do not count as applications (see Table 8). However, data are hard to come by for particular universities, and one must look at the overall figures to form a picture. In the UK, the Higher Education Statistics Agency records for 2010/2011 show that the universities had some 181,000 academic and some 197,000 nonacademic staff. In the US, the Department of Education reports that expenditure on nonfaculty professional employees per 100 faculty almost doubled between 1976 and 2009, to reach rough parity today.

The EPSRC is the largest of the UK's Research Councils—the others have similar policies. In 2011, in its Delivery Plan, 2011–2015, the EPSRC changed the way it considers research grants. The Council will take steps to embed impact into normal university thinking and to encourage researchers to inform, explain, and seek the views of the public at every stage of the research process. It will also "enable the most creative and potentially transformative research to flourish by increasing the proportion of flexible, longer and larger awards which emphasize creativity and long-term vision in research activity."


Table 8. Some EPSRC research-grant success rates, 2004–2011

Period                   Number of Research Grants Considered   Success Rates
April 2004–March 2005    5042                                   32% or 34% by value
April 2005–March 2006    5138                                   29% or 33% by value
April 2006–March 2007    4346                                   32% or 38% by value
April 2007–March 2008    4758                                   30% or 32% by value
April 2008–March 2009    4344                                   26% or 27% by value
April 2009–March 2010    3379                                   30% or 32% by value
April 2010–March 2011    2568                                   36% or 36% by value
April 2011–March 2012    1938                                   41% or 46% by value

Source: EPSRC (2012).
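To make the trend in Table 8 explicit, the short calculation below uses the first and last rows of the table. The input figures come straight from the table; the code is merely an illustrative aid, and the funded totals are rough products of the rounded success rates.

```
# Illustrative arithmetic on the first and last rows of Table 8 (EPSRC, 2012).
considered_2004 = 5042   # grants considered, April 2004 - March 2005
considered_2011 = 1938   # grants considered, April 2011 - March 2012

success_2004 = 0.32      # success rate by number, 2004/05
success_2011 = 0.41      # success rate by number, 2011/12

fall_in_considered = 1 - considered_2011 / considered_2004
funded_2004 = considered_2004 * success_2004
funded_2011 = considered_2011 * success_2011

print(f"Proposals considered fell by about {100 * fall_in_considered:.0f}%")
print(f"Grants funded: roughly {funded_2004:.0f} in 2004/05 vs {funded_2011:.0f} in 2011/12")
```

In other words, the rising headline success rate does not mean that more research was supported; the pool of proposals reaching the Council shrank by roughly 60%, and the number of grants actually funded roughly halved.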

Manufacturing the Future is an initiative that builds on existing investments, and will "focus on skilled leaders, sustainable manufacturing and training more researchers." All proposals will continue to be screened by peer preview. These policies are all defensible. It is crazy that a nation invests in research and then allows the results to gather dust. But it is the responsibility of the manufacturing industry and its proxies to ensure that its supply of research is adequate. The academic sector is being turned progressively into the servant of industry, and the UK has abandoned an interest in an unpredictable future that it was influential in pioneering before about 1970.

In 1945, the world's two leading scientific nations—the US and the UK—naturally took the lead in protecting freedom's role in the aftermath of war. Today, scientific enterprise is more complex and extended, and other nations such as China or India might be better placed to express their leadership and individuality. There is little doubt that slowly and progressively scientific policies have drifted away from the Bush-Dale ideals, and the intellectual barriers to restoring them are now substantial. Nowadays, it seems to have been accepted that pragmatism should rule. But science is perhaps the only human endeavor dominated by absolutes. Scientists strive to understand some aspect of the universe in its many manifestations and complexities, all of which, of course, are indifferent to human institutions or values. The rules governing scientists should therefore be as absolute and as free from considerations of survival as we can make them. If the Bush-Dale policies were once proved successful, then we must conclude that we—society—have hit upon a viable modus operandi by which humans can deal with absolutes. Moreover, it creates considerable benefit for humanity. It should only be changed for reasons that also have absolute bases. These amazing facts are generally ignored.

There are, of course, no reasons to believe, as Maddison half-heartedly suspected, that humanity has reached the limits to the advance of knowledge and that all the "easy" discoveries have been made.


There is so much ignorance that it is almost inconceivable that the proportion we understand will not change radically if scientists are free. The changes required would be negligible, but the system is resistant to any change, and the necessary small adjustments will be exceptionally difficult to achieve. As I have said repeatedly, change is only required at the margin where great discoveries are made. There are very few of those—perhaps some 500 in a century worldwide—and they are indeed the discoveries made by Planck Club members. However, scientists must be completely free of any pressures if they are to make them. Some or all of the following are needed:







•  Venture Research initiatives along the lines of the BP-funded model. The costs would be some £100–£200 million for a 10-year program, but it would take some years to reach these spending levels. (See the discussion in my Scientific Freedom: The Elixir of Civilization, Wiley, 2008.)
•  Venture Research initiatives along the lines of the UCL model. The costs would be hidden, as they would be included in university chief executives' discretionary budgets.
•  Universities should relax their grip on appointments, appoint some academics with terms of reference similar to those of surgeons, for example, and give them modest resources to use as they please. This system could operate within the existing arrangements, but it would need additional funding.
•  Select universities in which all academics are appointed on merit, given modest resources, and set free to study whatever they wish. This is by far the most expensive option. The costs would be roughly the same as a university's operating costs multiplied by the number of participating universities. However, few universities would be required, and they would need fewer administrators, as there would be no compliance activities, for example. It is the preferred option because scientists would not have to declare their intentions in advance, and could simply get on with their challenges without notifying anyone.

I was born on May 29, 1935, so I am now too old to take on new responsibilities. I am not, therefore, pleading for myself. We need investors like Bill and Melinda Gates to respond to such challenges, and people who share my vision to staff them. Their task would be to liberate human creativity from the current strangleholds that bind it.

In summary, the rapid growth in world per capita production since 1900 owes its origins to the work of a very few—probably about 500—pioneering scientists who came mainly from Europe and North America, a situation that should change. Most of them were academics, and some of their work has been described in this book. They discovered completely new facts about the universe we inhabit, and those discoveries enabled many others to develop the new technologies that transformed living standards globally.


These scientists worked in trusting environments that have almost disappeared. From approximately 1970, scientists have increasingly been required to justify their future research to their peers—their closest competitors—usually in terms of profitable outcomes, before they will be funded. These changes have almost killed the latent ability that many academic scientists have for turning their minds towards the big unanswered questions, which we have in abundance. So the opportunity exists: it simply cannot be pursued. Consequently, scientists' enforced conservatism substantially reduces the probability of major discoveries and contributes to the problem of falling growth. Unless we can restore complete, unqualified freedom to the few academics who can use it, our outlook will be bleak. The problem is whom to fund, and I have outlined a few possible lines of attack. Their numbers are small. Among other things, they represent an ideal opportunity for rich investors, but those investors must realize that the work of the few they choose to support cannot be predicted or controlled.

John Sulston (see Poster 5), who shared with Sydney Brenner and H. Robert Horvitz the 2002 Nobel Prize in Physiology or Medicine, awarded for their discoveries concerning "genetic regulation of organ development and programmed cell death," wrote the following comment on this book's approach to the concept of "impact" as currently practiced by virtually every government funding agency:

The most serious issue facing humanity today is that the earth cannot afford continued economic growth involving ever increasing material consumption. In principle we can decouple economic from material growth, but are as yet unable or unwilling to do so. But anyway there is no intrinsic reason that we need economic growth at all, apart from a perceived inability to manage an equitable non-growing economy. With or without impact statements science in its current form is making matters worse, because it is more and more paid for by profit seeking investment, which drives growth. On the other hand science is the most important thing that humans do culturally, because it is the best way to explore and enlarge our understanding. The problem is to decouple discovery from growth, when most people feel that's the whole purpose of the exercise. There is a useful point of alignment with your agenda, in that intellectually free universities are good places to explore new political social and economic models.

Other scientists have commented similarly on the need for growth; but, as Sulston would agree, capitalism demands it, and there is currently no alternative. I will let Gregory van der Vink have the last word. In an editorial in Science on May 23, 1997, he said:

It starts with universities, where success has historically been achieved through specialization in narrow subdisciplines. Courses for nonmajors are frequently viewed as distractions, and students who depart the so-called nerd herd to pursue careers in business or policy-making are frowned upon. Thus begins the vicious cycle: Bright students do not see science as a way to reach positions of leadership, and science suffers because those in leadership positions have little experience with science. . . . Our long-term future depends on citizens understanding and appreciating the role of science in our society. No panel report, no unambiguous example, and no well-connected lobbyist can make these arguments for us. In the next generation, we will need not only scientists who are experts in subspecialties, but also those with a broad understanding of science and a basic literacy in economics, international affairs, and policy-making. In the end, our greatest threat may not be the scientific illiteracy of the public, but the political illiteracy of scientists.

Appendix 1: Open Letter to Research Councils UK from Donald W. Braben and Others Published in Times Higher Education, November 5, 2009

The research councils have decided that proposals should include a plan of their "potential economic impact," a term that they stress embraces all the ways in which research-related knowledge and skills could benefit individuals, organizations and nations. Peer reviewers will be asked to consider whether plans to increase impact are appropriate and justified, given the nature of the proposed research. However, academic researchers are primarily responsible for the impartial pursuit of knowledge. Haldane acknowledged this many years ago, and the application of his famous Principle, by which governments did not interfere in scientific policy-making, was spectacularly successful for decades. Science is global, of course, and until relatively recently policies of non-interference flourished everywhere. The result was an abundance of unpredicted transformational discoveries, including DNA structure, the genetic code, holography, the laser, and magnetic resonance imaging, almost all of which came from academic research. These discoveries also stimulated unprecedented economic growth.

Earlier this year, some of us wrote to THE (12 February 2009) expressing our concern with the new requirement. We urged peer reviewers to stage a "modest revolt" by declining invitations to take potential economic impact into consideration, confining their assessments to matters in which they are demonstrably competent. Our correspondence indicates that many more supported our recommendation than would publicly admit. Researchers are concerned that participation in such a revolt might damage careers. However, by way of further encouragement, we would draw attention to the Russell Group's statement (RCUK consultation on the efficiency and effectiveness of peer review, January 2007):


There is no evidence to date of any rigorous way of measuring economic impact other than in the very broadest of terms and outputs. It is therefore extremely difficult to see how such Panel members (those expert in the economic impact of research) could be identified or the basis upon which they would be expected to make their observations. Without such a rigorous and accepted methodology, this proposal could do more harm than good.

This opinion from a body comprising the UK's leading research universities is a damning indictment. We the undersigned seek to persuade the research councils that their policies on potential impact are ill advised and should be withdrawn. The research councils are, of course, striving to ensure continued public support and government funding for research. However, while UK academic research has substantial economic potential, hobbling it with arbitrary constraints is counterproductive. We urge, therefore, that the research councils find scientific ways of convincing the public and politicians that fostering academic freedom offers by far the best value for taxpayers' money and the highest prospects for economic growth.

This opinion from a body comprising the UK’s leading research universities is a damning indictment. We the undersigned seek to persuade the research councils that their policies on potential impact are ill advised and should be withdrawn. The research councils are, of course, striving to ensure continued public support and government funding for research. However, while UK academic research has substantial economic potential, hobbling it with arbitrary constraints is counterproductive. We urge, therefore, that the research councils find scientific ways of convincing the public and politicians that fostering academic freedom offers by far the best value for taxpayers’ money and the highest prospects for economic growth. Donald W Braben, UCL, and the following who also sign in a personal capacity: John F Allen, Queen Mary, University of London; William Amos, University of Cambridge; Michael Ashburner FRS, University of Cambridge; Jonathan Ashmore FRS, UCL; Tim Birkhead FRS, University of Sheffield; Mark S Bretscher FRS, MRC Laboratory of Molecular Biology, Cambridge; Peter Cameron, Queen Mary, University of London; Richard S Clymo, Queen Mary, University of London; Richard Cogdell FRS, University of Glasgow; David Colquhoun FRS, UCL; Adam Curtis, Glasgow University; John Dainton FRS, University of Liverpool; Michael Fisher, University of Liverpool; Leslie Ann Goldberg, University of Liverpool; Pat Heslop-Harrison, University of Leicester; Dudley Herschbach, Harvard University, Nobel Laureate;


H Robert Horvitz FRS, MIT, Nobel Laureate; Sir Tim Hunt FRS, Cancer Research UK, Nobel Laureate; Herbert Huppert FRS, University of Cambridge; H Jeff Kimble, Caltech, US National Academy of Sciences; Sir Aaron Klug FRS, MRC Laboratory of Molecular Biology, Cambridge, Nobel Laureate; Roger Kornberg FRS, Stanford University, Nobel Laureate; Sir Harry Kroto FRS, Florida State University, Tallahassee, Nobel Laureate; Michael F Land FRS, University of Sussex; Peter Lawrence FRS, MRC Laboratory of Molecular Biology, Cambridge; Angus MacIntyre FRS, Queen Mary, University of London; Sotiris Missailidis, Open University; Philip Moriarty, University of Nottingham; Andrew Oswald, University of Warwick; Lawrence Paulson, University of Cambridge; Douglas Randall, University of Missouri, US National Science Board member; David Ray, BioAstral Limited; Venki Ramakrishnan FRS, MRC Laboratory of Molecular Biology, Cambridge, Nobel Laureate; Guy P Richardson FRS, University of Sussex; Sir Richard J Roberts FRS, New England Biolabs, Nobel Laureate; Ian Russell FRS, University of Sussex; Ken Seddon, Queen’s University of Belfast; Steve Sparks FRS, University of Bristol; Sir John Sulston FRS, University of Manchester, Nobel Laureate; Harry Swinney, University of Texas, US National Academy of Sciences; Iain Stewart, University of Durham; Claudio Vita-Finzi, Natural History Museum; David Walker FRS, University of Sheffield; Eric F Wieschaus, Princeton University, Nobel Laureate; Glynn Winskel, University of Cambridge; Lewis Wolpert FRS, UCL; Phil Woodruff FRS, University of Warwick.

Appendix 2: Global Warming: A Coherent Approach

The atmosphere is host to a wide range of complex processes that a large number of climate modelers worldwide are struggling to understand. However, it is not an isolated entity. One boundary is in intimate contact with a similarly complex but much larger body—the Earth—that is about a million times more massive. The other separates us—humanity—from the rest of the universe. Thus arises one of the twenty-first century's most baffling questions: Are our activities* affecting the planet?

The atmosphere's first terrestrial contact is largely with the oceans—300 times more massive and covering about two-thirds of the Earth's surface. Transport of heat, water, gases, and particulates across the ocean-atmosphere interface plays a key role in the global climate system. The oceans transport huge quantities of heat—from the equator to the poles, for example—and play crucial roles in the global carbon cycle. Biological effects are also important. Marine microbes are responsible for about half of the Earth's production of carbon and of nutrients such as phosphorus and nitrogen, and their cycling can affect atmospheric concentrations of CO2, the best known of the greenhouse gases. As a further illustration of these complexities, in 2006 it was unexpectedly discovered that the migration of marine animals—the movements of large numbers of whales, for example—might make important contributions to global warming (Kunze et al. 2006).

* An earlier version of this appendix was published in Donald W. Braben, Scientific Freedom: The Elixir of Civilization, Wiley 2008, p. 12.


Biologically generated turbulence can significantly affect the transport of heat and nutrients, and air-sea gas exchanges. We understand few of these complex processes.

Thermal processes within the Earth itself are highly complex. The Earth's magnetic field originates in convection currents within the outer (molten iron) core. The radius of the inner, solid-iron core is probably increasing. Thus, there are liquid–solid phase transitions that necessarily involve the exchange of large amounts of latent heat, of which we understand very little. The temperature of the core, for example, cannot be measured directly and is uncertain to about 1000°C. The Earth's magnetic field has declined by about 20% in the last hundred years, and it is possible that we are seeing the prelude to one of the periodic magnetic-pole flips that occur on average about every million years. The atmosphere is therefore subject to unknown but probably substantial fluctuations in its heat exchanges with the Earth. Their time scale may be long, perhaps longer than the effects we see in the atmosphere, but we simply do not know.

At the atmosphere's upper boundary, the Sun also makes minor changes to its long-term output (over years to decades: the ∼11-year sunspot cycle, for example) that we do not fully understand. Indeed, it is possible that these solar fluctuations (Friis-Christensen et al. 1991) alone can account for the observed changes in the Earth's temperature over the past 100 years. Furthermore, the Earth is bathed in solar and galactic cosmic rays that vary unpredictably in intensity; and, to compound these problems, the strength of its protective magnetic field is falling. Cosmic rays can affect the structure of the upper atmosphere and lead to changes (cloud formation rates, etc.) that in turn can affect climate. No doubt there are other problems of which we are as yet unaware.

The processes that determine the atmosphere's behavior are highly nonlinear; that is, variations are very complex and are only rarely related simply and predictably to other events. Feedback mechanisms are essential to stability. As the Earth warms, for example, evaporation of water vapor (another greenhouse gas) should increase, thereby increasing global warming, which should also increase the rate of cloud formation. Clouds increase the Earth's reflectivity and hence reduce the Sun's heat arriving at the surface. But the delicate balances involved in these processes are unknown, as is their relative importance compared with, say, ice-cap melting, which should decrease reflectivity. There are in fact a multitude of such feedback mechanisms, and most if not all of them are poorly understood.

Climate modelers are making heroic attempts to tame these complexities and make predictions, but computer models can only be as good as the data they use and the assumptions made in using them. The Ozone Hole, for example, the seasonal reduction in the capacity of the Antarctic atmosphere to absorb harmful ultraviolet radiation, was discovered by the British Antarctic Survey in 1985.


Joe Farman and his colleagues suspected that computers had been programmed automatically to reject vital data that, in the interests of expediency, consensus had deemed to be of no interest (Farman 1985). Their suspicions were confirmed, and their more complete analysis revealed the infamous Hole. They also helped instigate the programs to reduce such atmospheric pollutants as the chlorofluorocarbons that were destroying the ozone.

The causes of the Little Ice Age that seriously affected Europe in particular from about 1600 to 1800 are still not understood.* However, coincidentally or not, during the seventeenth century the Sun was virtually without spots for about 70 years, an observation that has bolstered research on the theoretical underpinnings of the solar cycle's influence on Earth's climate. In addition, long-term predictions of nonlinear systems can be fraught with danger. Although there are many models of world economic performance, for example, and human interactions are reasonably well understood, how many modelers would dare to predict inflation levels, say, in 2050 and insist on being taken seriously?

It was reported in 2012 that, unlike the dramatic losses reported in the Arctic, the Antarctic sea-ice cover has increased. Data from the Jet Propulsion Laboratory, using over 5 million individual daily ice-motion measurements made over a period of 19 years (1990–2009) by four US Defense Meteorological satellites, show for the first time the long-term changes in sea-ice drift around Antarctica. Paul Holland of the British Antarctic Survey comments (Holland and Kwok 2012):

Until now these changes in ice drift were only speculated upon, using computer models of Antarctic winds. This study of direct satellite observations shows the complexity of climate change. The total Antarctic sea-ice cover is increasing slowly, but individual regions are actually experiencing much larger gains and losses that are almost offsetting each other overall. We now know that these regional changes are caused by changes in the winds, which in turn affect the ice cover through changes in both ice drift and air temperature. The changes in ice drift also suggest large changes in the ocean surrounding Antarctica, which is very sensitive to the cold and salty water produced by sea-ice growth. Sea ice is constantly on the move; around Antarctica the ice is blown away from the continent by strong northward winds. Since 1992 this ice drift has changed. In some areas the export of ice away from Antarctica has doubled, while in others it has decreased significantly. . . . The Arctic has experienced dramatic ice losses in recent decades while the overall ice extent in the Antarctic has increased slightly. However, this small Antarctic increase is actually the result of much larger regional increases and decreases, which are now shown to be caused by wind-driven changes.

* See, for example, the US National Academy's Workshop Report, "The effects of solar variability on Earth's climate" (2012).


There is an urgent need, therefore, for more coherent data on temperature, density, pressure, and salinity, data that take all their interrelationships into account and are distributed at higher resolution around the globe, rather than, as one hears so often today, for ever more powerful computers. Higher resolution means higher investment in equipment, of course, but bearing in mind the many unknown complexities of global warming, the only way to ensure coherent approaches is to encourage unfettered scientific exploration. What we have, unfortunately, are armies of explorers obliged to treat components of this hyper-complex system as though they can be studied in isolation—as if each component were independent of all the others.

Furthermore, the Intergovernmental Panel on Climate Change and many other scientists seem to have assembled compelling, comprehensive and objective evidence that humans are changing the climate in ways that threaten our societies and the ecosystems on which we depend. However, under today's rules on research selection, this very substantial consensus undermines all attempts to challenge it, as if it were already established as indisputable fact. This is simply because it is unlikely that such challenges would survive expert scrutiny from those who may have played a role in forming that consensus or who agree with it, a consensus that a majority of scientists would seem to subscribe to.

I am not an advocate for inaction, however. For example, even though our understanding may be incomplete, it is irresponsible to continue injecting large and rising amounts of greenhouse gases such as CO2 into the atmosphere.* They should be reduced as far as possible, of course, or at least not increased, until we are reasonably sure that we know what we are doing. But what we should not do is claim that, if humanity takes such actions, the global-warming problem will be solved. It might not be. This book provides extensive evidence that majority opinion in scientific matters has often been completely wrong. The main causes might therefore still await discovery. It is even possible that it is not a problem we can do anything about.

* An excellent summary of the problems associated with controlling or trading carbon emissions is given in an editorial written by W. H. Schlesinger published in Science 314, 1217 (2006).
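As a closing aside on the remark above that long-term predictions of nonlinear systems can be fraught with danger, the minimal sketch below iterates the logistic map, a standard toy model of nonlinear feedback, from two starting points that differ by only one part in a million. It has nothing to do with any real climate model; it simply shows how quickly tiny uncertainties can grow in even the simplest nonlinear system.

```
# Two runs of the logistic map x -> r * x * (1 - x), a textbook toy model of
# nonlinear feedback, started from values that differ by one part in a million.
r = 3.9                       # growth parameter in the chaotic regime
x_a, x_b = 0.400000, 0.400001

for step in range(1, 31):
    x_a = r * x_a * (1 - x_a)
    x_b = r * x_b * (1 - x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: run A = {x_a:.6f}, run B = {x_b:.6f}, gap = {abs(x_a - x_b):.6f}")

# After a few dozen steps the two runs bear no resemblance to each other,
# even though the rule and the starting conditions are almost identical.
```

The point is not that the climate behaves like this simple equation, only that nonlinearity by itself is enough to make long-range forecasts extraordinarily sensitive to what we do not know.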

References

Abramovitz, M., Am. Econ. Rev. 46, 5 (1956).
Andrade, E. N. da C., The Rutherford Memorial Lecture, 1957. The birth of the nuclear atom, Proc. Roy. Soc. Series A, 244, 437–455 (1958).
Avery, O. T., C. M. MacLeod, and M. McCarty, Studies of the chemical nature of the substance inducing transformation of pneumococcal types, J. Exp. Med. 79, 137 (1944).
Berry, M. V., Principles of Cosmology and Gravitation, Adam Hilger, 1991.
Boyer, P. D., B. Chance, L. Ernster, P. Mitchell, E. Racker, and E. C. Slater, Oxidative phosphorylation and photophosphorylation, Ann. Rev. Biochem., 46, 955–1026 (1977).
Braben, D. W., Pioneering Research: A Risk Worth Taking, Wiley, 2004.
Braben, D. W., Scientific Freedom: The Elixir of Civilization, Wiley, 2008.
British Antarctic Survey Press Release, Why Antarctic Sea Ice Cover Has Increased under the Effects of Climate Change, referring to a paper by Paul R. Holland and Ron Kwok, Wind-driven trends in Antarctic sea-ice drift, Nature Geoscience 5, 872–875, November 2012.
Bronowski, J., The Ascent of Man, Macdonald Futura, 1981.
Brown, S. C., Count Rumford Physicist Extraordinary, Doubleday and Company, 1962.
Buderi, R., The Invention That Changed the World, Abacus, 1996.
Christianson, G. E., In the Presence of the Creator, Free Press, 1984.
Clark, R. W., Einstein: The Life and Times, Hodder and Stoughton, 1982.


Collins, F., The heritage of humanity, Nature S1, 9–12 (2006).
Comfort, N. C., The Tangled Field, Harvard University Press, 2003.
Crick, F., Central dogma of molecular biology, Nature 227, 561–563 (1970).
Curl, R. F., and W. D. Gwinn, Kenneth S. Pitzer, J. of Phys. Chem. 90 (20), 7743–7753 (1990).
Dai, L., Carbon Nanotechnology: Recent Developments in Chemistry, Physics, Materials Science, and Device Applications, Elsevier BV, 2006.
Denison, E., Trends in American Economic Growth, 1929–1982, Brookings Institution, 1985.
Dubos, R. J., The Professor, the Institute and DNA, The Rockefeller University Press, 1976.
Einstein, A., Zur quantentheorie der strahlung (On the quantum theory of radiation), Physica Zeitschrift 18, 121 (1917).
ENCODE Project Consortium, An integrated encyclopedia of DNA elements in the human genome, Nature 489, 57–74 (2012).
Enz, C. P., No Time to Be Brief, Oxford University Press, 2002.
Erlichson, H., Sadi Carnot, founder of the second law of thermodynamics, European Journal of Physics, 20, 183 (1999).
Faraday, M., Introduction, by John Tyndall, in Experimental Researches in Electricity, J. M. Dent & Sons.
Farman, J. C., B. G. Gardiner, and J. D. Shanklin, Large losses of total ozone in Antarctica reveal seasonal ClOx/NOx interaction, Nature 315, 207–210 (1985).
Fedoroff, N. V., Biogr. Mems. Fell. R. Soc. 40, 266–228 (1994).
Fedoroff, N. V., Two women geneticists, Am. Scholar 65, 587–592 (1996).
Flamm, D., Ludwig Boltzmann: A Pioneer of Modern Physics, accessed in 1997, arXiv:physics/9710007v1.
Fleming, A., Penicillin, Nobel Prize lecture, December 11, 1945.
Friis-Christensen, E., and K. Lassen, Length of the solar cycle: An indicator of solar activity closely associated with climate, Science 254, 698–700 (1991).
Fox, G. E., L. J. Magrum, W. E. Balch, R. S. Wolfe, and C. R. Woese, Classification of methanogenic bacteria by 16S ribosomal RNA characterization, Proc. Natl. Acad. Sci. 74, 4537–4541 (1977).
Geim, A., Random walk to graphene, Nobel Prize lecture. Nobelprize.org. Nobel Media AB 2013.
Gilbert, W., The RNA world, Nature 319, 618 (1986).
Graur, D., Y. Zheng, N. Price, R. B. R. Azevedo, R. A. Zufall, and E. Elhaik, Genome biology and evolution advance access, Feb. 20, 2013, DOI:10.1093/gbe/evt028.
Harrison, E., Darkness at Night, Harvard University Press, 1987.
Hartley, H., Humphry Davy, EP Publishing, 1972.
Heilbron, J. L., The Dilemmas of an Upright Man, University of California Press, 1986.
Hershey, A., and M. Chase, Independent functions of viral protein and nucleic acid in growth of bacteriophage, J. Gen. Physiol. 36, 39 (1952).
HMPC (The Human Microbiome Project Consortium), Structure, function and diversity of the healthy human microbiome, Nature 486, 207–221 (2012).


Hobbes, T., Leviathan, 1651, chapter 13.
Iijima, S., Helical nanotubes of graphitic carbon, Nature, 354, 56 (1991).
International Human Genome Sequencing Consortium, Initial sequencing and analysis of the human genome, Nature 409 (6822), 860–921 (2001).
Kass, L. B., Missouri compromise: Tenure or freedom. New evidence clarifies why Barbara McClintock left Academe, Maize Genetics Cooperation Newsletter 79, 52–71 (2005).
Kroto, H. W., J. R. Heath, S. C. O'Brien, R. F. Curl, and R. E. Smalley, C60: Buckminsterfullerene, Nature 318 (November 14, 1985).
Kunze, E., J. F. Bower, I. Beveridge, R. Dewey, and K. P. Bartlett, Observations of biologically generated turbulence in a coastal inlet, Science 313, 1768–1770 (2006).
Lane, N., Oxygen: The Molecule That Made the World, Oxford University Press, 2002.
Lane, N., Power, Sex, Suicide: Mitochondria and the Meaning of Life, Oxford University Press, 2005.
Lane, N., Life Ascending, Profile Books, 2009.
Lane, N., and W. Martin, The energetics of genome complexity, Nature 467, 929–934 (2010).
Larson, P. O., and M. von Ins, Scientometrics 84, 575 (2010).
Lecky, W. E. H., Democracy and Liberty, Longmans, Green and Co., 1899.
Longair, M., The Cosmic Century, Cambridge University Press, 2006.
Lovelock, J., The Vanishing Face of Gaia: A Final Warning, Penguin, 2010.
Maddison, A., Monitoring the World Economy: 1820–1992, Organization for Economic Co-operation and Development, 1998.
Maddison, A., The World Economy: A Millennium Perspective, Organization for Economic Co-operation and Development, 2006.
Madigan, M. T., and B. L. Marrs, Extremophiles, Sci. Am., April 1997, 82–87 (1997).
Mattick, J. S., Introns: Evolution and function, Curr. Opin. Genet. Dev. 4, 823–831 (1994).
Mattick, J. S., Challenging the dogma: The hidden layer of non-protein-coding RNAs in complex organisms, BioEssays 25.10, 930–939 (2003).
Mattick, J. S., Deconstructing the dogma, Ann. NY Acad. Sci., 1178, 29–46 (2009).
Mayr, E., Two empires or three? Proc. Natl. Acad. Sci. 95, 9720–9723 (1998).
McClintock, B., Letter to John Fincham, a biologist at the University of Leeds, May 16, 1973.
Meitner, L., and O. R. Frisch, Disintegration of uranium by neutrons: A new type of nuclear reaction, Nature 143, 239–240 (1939).
Mitchell, P., Coupling of phosphorylation to electron and hydrogen transfer by a chemiosmotic type of mechanism, Nature, 191, 145–148 (1961).
Monoclonal antibodies, The Economist, August 10, 2000.
Morell, V., Microbial biology: Microbiology's scarred revolutionary, Science 276, 699–702 (1997).
Morgan, L. H., Ancient Society, Belknap Press, 1877, pp. 34–35.
Morris, S. C., Life's Solutions: Inevitable Humans in a Lonely Universe, Cambridge University Press, 2003.


The Nobel Foundation (2012). Sir Harold Kroto—Biographical, http://www.nobelprize.org/nobel_prizes/chemistry/laureates/1996/kroto-bio.html.
Noe, C. R., and A. Bader, Facts are better than dreams, Chem. in Britain (February 1993).
Odling-Smee, L., Darwin and the 20-year publication gap, Nature, 446, 478–479 (2007).
Orgel, L., Are you serious, Dr. Mitchell? Nature 402, 17 (1999).
Pais, A., Niels Bohr's Times: In Physics, Philosophy and Polity, Clarendon Press, 1993.
Perutz, M., Bragg, protein crystallography and the Cavendish laboratory, Acta Cryst. A26, 183–185 (1970).
Planck, M., Preface, by Albert Einstein, Where Is Science Going?, Oxbow Press, 1933.
Planck, M., Scientific Autobiography and Other Papers, Williams and Norgate, 1950.
Prebble, J., Peter Mitchell and the ox phos wars, Trends Biochem. Sci., 27, 209 (2002).
Prebble, J., and B. Weber, Wandering in the Gardens of the Mind: Peter Mitchell and the Making of Glynn, Oxford University Press, 2003.
Quinn, S., Marie Curie: A Life, Mandarin, 1995.
Racker, E., Reconstitution, mechanism of action, and control of ion pumps, Biochem. Soc. Trans., 3, 785 (1975).
RCUKa (Research Councils UK), September 2011.
RCUKb (Research Councils UK), May 2012.
Realising Our Potential: A Strategy for Science, Engineering and Technology, Cm 2250, 1993.
Rivera, M. C., and J. A. Lake, Ring of life provides evidence for a genome fusion origin of eukaryotes, Nature, 431, 152–155 (2004).
Rohlfing, E. A., D. M. Cox, and A. Kaldor, Production and characterization of supersonic cluster beams, J. Chem. Phys. 81, 332 (1984).
Rutherford, E., The chemical nature of the alpha particles from radioactive substances, Nobel Prize lecture. Nobelprize.org. Nobel Media AB 2013.
Schlesinger, W. H., Editorial in Science 314, 1217 (2006).
Smalley, R. E., Discovering the fullerenes, Nobel Prize Lecture, 1996.
Slater, E. C., Peter Dennis Mitchell, Biogr. Mems. Fell. Roy. Soc. 40, 282 (1992).
Solow, R. M., Rev. Econ. Stat. Aug., 312 (1957).
Stent, G. S., The DNA double helix and the rise of molecular biology, in The Double Helix, Norton, 1980.
Townes, C., How the Laser Happened, Oxford University Press, 1999.
The 2011 EU Industrial R&D Investment Scorecard (2011), figure 4, p. 32.
United States General Accounting Office report, Cooperative threat reduction: Status of defense conversion efforts in the former Soviet Union, 1997.
US National Academy, Workshop Report, The effects of solar variability on Earth's climate, 2012.
van Wyhe, J., Mind the gap: Did Darwin avoid publishing his theory for many years? Notes Rec. Roy. Soc. 61, No. 2, 177–205 (May 2007).
Vorzimmer, P., Charles Darwin and blending inheritance, Isis, 54, 371–390 (1963).
Waddington, C. H., Tools for Thought, Jonathan Cape, 1977.


Watson, J. D., and F. H. C. Crick, Molecular structure of nucleic acids, Nature, 171, 737–738 (1953).
Wilson, R. W., K. B. Jefferts, and A. A. Penzias, Carbon monoxide in the Orion Nebula, Astrophys. J. Lett., 161, L43–L44 (1970).
Woese, C., Default taxonomy: Ernst Mayr's view of the microbial world, Proc. Natl. Acad. Sci. 95, 11043–11046 (1998).
Woese, C., A new biology for a new century, Microbiol. Mol. Biol. Rev. 68, 173–186 (2004).
Woods, R. A., Biochemical Genetics, Chapman and Hall, 1980.
Wordsworth, W., The Prelude (ca. 1800).
World Bank, GDP Growth (Annual %), 2012.
Zuckerman, B., et al., Detection of interstellar trans ethyl alcohol, Astrophys. J., 196, L99–L102 (1975).

Index

Note: Page numbers in italics refer to figures; those in bold to tables. Abramovitz, Moses,  31 Academic merit. See Peer preview; Peer review Accidents. See Coincidence and accidents, role in scientific discovery Adenosine triphosphate,  129–132, 132, 134, 140, 186 Advancement. See Scientific discovery, research, and advancement; Standard of living Alchemy,  140 Alexander, Anthony,  147 Allen, Eunice,  183 Allocation of resources capitalism,  33, 192 efficiency, perception of,  36 impact and, see Impact, of research research policies,  2 See also Funding; Prediction of outcomes and financial impact Alloway, J.L.,  86

α-particles,  58, 62 Altruism,  35 Ambition,  16 American Revolution,  17–18 Ancient Society (Morgan),  28 Andrade, Edward,  50–51 Annalen der Physik,  47 Ansari family,  184 Anschütz, Richard,  144 Apollo project,  103 Apoptosis,  119, 168, 185 Arabian Nights,  184 Archaea,  120–121, 121, 123, 124 Aristotle,  140 Aston, F.W.,  54 Atomic theory,  79 ATP (adenosine triphosphate),  129–132, 132, 134, 140, 186 ATP synthase,  132, 134, 136 Avery, Oswald T.,  79–88 Dubos on,  83–84 early career,  81–82





Nobel Committee and,  87–88, 90 opposition faced,  82, 86, 87, 160 personality,  83 photograph of,  84 research and discoveries of,  85–87, 99, 111 research that paved the way for,  79–81 Bacon, Francis,  29 Bacon, Roger,  29 Bacteria, outer layers,  85 Bacteriophages,  95, 111–112 Balmer, Johan,  63 Balmer formula,  63 Basov, Nikolai,  100, 108 Bauer, Hans,  66 Becquerel, Henri,  57 Bede,  29 Bell Labs,  3, 100, 101, 104, 135, 154 Bennett, Mike,  183 Benzene,  143, 144 Berry, Michael,  67, 176–177 Bigelow, Henry J.,  82 Binomial system,  81 Blackbody radiation,  43, 44 Blair, Tony,  163 Bloch, Felix,  104 Bohr, Niels,  60–65 competition and,  26 Delbrück and,  94 early life of,  60–61 Heisenberg and,  74 Meitner and,  90 Nobel Prize,  64 Pauli and,  68, 70 photograph of,  65 research and discoveries of,  51, 61–64, 66, 68, 76 Rutherford and,  60 thesis,  61 Townes and,  106 Boissonade, Jacques,  183 Boltzmann, Ludwig Bronowski on,  47 friends of,  145 gravestone,  46

opposition to,  45–46 Planck on,  45 predictions of,  48 Boltzmann’s constant,  46 Bonaparte, Napoleon,  20, 22 Born, Max,  70–71, 72 Bose, Satyendra Nath,  76 Boulton, Matthew,  19 Bowen, Ira,  101 Boyer, Paul,  134, 136 BP research as author’s employer,  154 vs. freedom and creativity,  155 as funding model,  191 as source of scientific discovery,  3, 101 Venture Research sponsorship,  84, 137, 150, 151, 182–183 Braben, David J.,   182 Braben, Donald W. Earth Sciences, honorary degree,  185 employment with BP,  154 photograph of,  186 Pioneering Research: A Risk Worth Taking,  184 relationship with Kroto,  150 Scientific Freedom: The Elixir of Civilization,  15, 35, 100, 182, 183, 191 See also Venture Research project Bragg, Lawrence,  20, 113–114, 118 Bragg, William,  20 Brenner, Sydney,  118–119, 164, 192 British Technology Group (BTG),  11 Bronowski, Jacob,  47, 80 Brown, Sandborn,  17 Bruno, Giordano,  24 Buckminsterfullerene,  152, 153, 155 “Bucky ball,”  153 Burnham, Charles,  97 Bush, Vannevar,  4–5, 93, 103, 174, 178 Caenorhabditis elegans,  119, 170 Capitalism, effect on research growth,  33, 192 Carbon allotropic (natural) forms,  142 benzene structure,  144 bonds formed by,  142

208 Carbon (continued) ethane molecule,  148 fullerenes,  150, 151–152, 152, 155 graphene,  2D form,  177, 182 nanotubes,  153 overview of,  141–142 radio astronomy and,  146 Carlsberg,  71 Carnegie, Andrew,  3, 15, 93 Carnot, Sadi,  41–43 Cathode radiation,  54–55 Cell,  186 Cellular respiration,  132 Chadwick, James,  60, 70, 76 Chain, Ernst,  13 Change. See Economic growth; Scientific discovery, research, and advancement; Unknown, exploration of Chase, Martha,  111–112 Chaucer, Geoffrey,  29 Chemiosmosis,  186 Chemiosmotic hypothesis,  132, 134, 135 Chen Award,  172 China egalitarianism, x growth in GDP,  181 Human Genome Project,  170 Christiansen, Christian,  61, 62 Christianson, Gale,  140 Chromosomes,  81, 90–91, 92, 95–96, 98, 166 Churchill, Winston,  78, 135, 177 Cicero,  29 Clark, Ronald,  52 Clark, Terry,  183 Classification of organisms,  81, 116, 121, 122. See also Lane, Nick; Mendel, Gregor; Woese, Carl Clausius, Rudolph,  42, 43 Cleese, John,  42 Clinton, William S.,  163 Coalition for Science Funding,  187–188 Coburn, Tom,  188 Cockcroft, John Douglas,  60 Coherent vs. incoherent molecules,  105, 107

INDEX

Coincidence and accidents, role in scientific discovery,  16–26 Davy, Humphry,  17, 19, 21–22 Faraday, Michael,  16–17, 23–24 Oersted, Hans Christian,  23 religious restrictions,  24–25 Thompson, Benjamin career and inventions of,  17–18 early life of,  17 move to Paris,  19–20 Royal Institution of Great Britain,  18–21 See also Prediction of outcomes and financial impact; Unknown, exploration of Cold Spring Harbor Laboratory,  93, 94, 95, 98, 111, 165 Coleridge, Samuel Taylor,  19 Collaboration Hahn and Meitner,  89 Human Genome Project,  170–171 Lovelock and Margulis,  128 McClintock and Randolph,  90 multidimensional chemistry,  138 “ox phos wars,”  135, 136 Swinney-DeKepper collaboration,  138 Venture Research and Univ. College London,  185, 191 Collins, Francis,  170 Competition among colleagues,  53, 61, 74 Bohr and,  61 continued practice of,  189 ecological niches,  123 for funding,  111, 176 justifying research to,  192 vs. Venture Research project,  181 Completeness,  124–125 “Conformational theory,”  134 Contrarians. See Controversy and opposition Controversy and opposition Avery and DNA,  82, 86, 87, 160 current policies,  159 Darwin and evolution,  159–160 Mattick and RNA proposal,  171, 172, 173

INDEX

McClintock and transposons,  92, 97, 160 Mitchell and chemiosmotic theory,  132–135 “ox phos wars,”  133–136 vs. peer preview policies,  174 of religion, on science,  163–164 response to new discoveries,  165–166, 173 Townes and Apollo project,  103 vested interests,  158 Woese and archaea domain,  120–121, 160 Copernicus,  139 Cowan, Clyde,  70 COWDUNG (conventional wisdom of the dominant group),  141 Creativity. See Freedom and creativity; Scientific discovery, research, and advancement Crick, Francis,  112–114, 161 Curie, Marie,  75, 89 Curie, Pierre,  89 Curl, Robert,  147–148, 149, 153 C-value paradox,  170 Dale, Henry,  4–5, 20, 87, 174, 178 Daneholt, Bertil,  162–163 Dark matter and energy,  77, 127 Darwin, Charles,  24–25, 79–80, 159–160 Darwin, Charles Galton,  62 Darwin Digital Library of Evolution,  160 Davies, Steve,  183 Davy, Humphry career and discoveries of,  21–22 Faraday and,  17 Royal Institution of Great Britain,  19, 20–21 Davy Faraday Research Laboratory,  20 Dawson, M.H.,  85 de Broglie, Louis,  71–72, 75, 76 DeKepper, Patrick,  138, 183 Delbrück, Max,  94–95 Demerec, Milislav,  93, 94, 95 Deneubourg, Jean Louis,  183

209 Denison, Edward,  32 Deoxyribonucleic acid (DNA). See DNA (deoxyribonucleic acid) Dirac, Paul,  72, 74, 75, 76 Discovery. See Scientific discovery, research, and advancement DNA (deoxyribonucleic acid) carbon and,  142 in chromosome,  92 controversy over,  160 dogma of,  160–163, 164 endosymbiosis and,  186–187 fingerprint of,  176 gene activity proposal,  169 isolation of, first,  86 “junk” DNA,  168–169 replication,  129 structure of,  113, 166, 167 viruses and,  112 Watson and Crick,  112–114 Dogma, in biology,  160–161, 161 Dubos, René,  82, 83–84, 87 Earth atmosphere, as shield,  126 life on,  127 as sentient organism,  127–128 Ecological niches,  123 Economic growth,  27–37 capitalism,  33, 192 consequences of,  35 decoupling discovery from,  192 dependence on unknown,  175 free trade,  35 in GDP terms,  30 Industrial Revolution,  29–30 influences on,  33–34 maser-laser technology and,  108 origins of,  31–33 population growth rates,  28 quantum mechanics effect on,  75 risk management and, see Risk science and technology, dependence on,  180 standard of living, see Standard of living technical change and,  32

210 Economic growth (continued) unknown, dependence on,  159, 175 See also Scientific discovery, research, and advancement Education hands-on nature of,  143, 144 of public by industry,  101 Edward VII,  20 Efficiency and productivity,  35–36 changes in,  34 vs. freedom and creativity,  154–155 impact and, see Impact, of research as policy component,  33–34, 77–78, see also Policies proposals and, see Proposals Einstein, Albert on drive,  22 on Mach,  66 Nobel Prize,  71 on Pauli,  66 photograph of,  75 on Planck,  38–39, 48 publication,  47 research and discoveries of,  8, 51, 76 research that paved the way for,  24 Townes and,  101, 105, 108 youth of,  52 Electromagnetic field discovery,  23–24 Electromagnetic waves,  39 Electron transfer,  132 Elementary quantum of action,  47 Encyclopedia of DNA Elements (ENCODE),  171, 172, 173 Endosymbiosis,  186 Engineering and Physical Sciences Research Council (EPSRC) Geim on,  177 Ideas Factory,  14 Kroto and,  155–156 Manufacturing the Future initiative,  190 policies of,  189–190 priority areas of,  178 proposal success rates,  190 rejection of Seddon’s proposal,  182 England. See United Kingdom Entropy,  41, 43, 44–45, 49 Enz, Charles,  65

INDEX

Ethane,  148 Eukaryotic genes,  162, 164, 186–187 European Research Council,  179 Evolution of life on Earth,  127 nuclear vs. mitochondrial genomes,  186 RNA’s role in,  167–168, 171–172 Evolutionary classification of organisms,  81, 116, 121, 122. See also Lane, Nick; Mendel, Gregor; Woese, Carl Exclusion Principle,  68, 72 Exons and introns,  162, 163, 164, 165, 166–168 Extremophiles,  123 Exxon laboratory,  150 Fairclough, John,  183 Faraday, Michael career, start of,  22 early life of,  16–17 electromagnetic field discovery,  23–24 influence of and inspiration,  39 research and discoveries of,  143, 148 Fedoroff, Nina,  91–92, 94, 96–97 Fellows program, IBM,  154 Fermi, Enrico,  70, 76 Feynman, Richard,  25, 154 Financial crises, role of science and technology in,  175 First Law of Thermodynamics,  40 Fission, nuclear,  90 Flamm, Dieter,  46 Fleming, Alexander,  12–13 Florey, Howard,  13 France Carnot and,  41 growth in GDP,  181 Human Genome Project,  170 Napoleon Prize,  21–22 Franks, Nigel,  183 Free trade, growth and,  35 Freedom and creativity vs. academic posts,  164, 180, 189 Carnegie Institute and,  93

COWDUNG (conventional wisdom of the dominant group),  141 encouragement of,  49, 51, 179 to follow intuition,  64 Kroto and,  145, 146 McClintock and,  93, 94, 98 Mitchell and,  136–137 from Navy, in maser-laser research,  108–109 vs. peer preview,  114, see also Peer preview vs. proposals, see Proposals vs. research policies,  6–7, 9, 191 restoration of,  36–37, 190 Rockefeller’s insistence on,  83 in scientific research,  5, 8 support for,  135 in United States,  187–188 University of London College initiative,  185 Venture Research project,  84, 141, 181 Woese and,  124–125 See also Funding; Policies; Scientific discovery, research, and advancement Frisch, Otto,  90 Fuller, Richard Buckminster,  152 Fullerenes,  151–152, 155 Funding administrative roles in,  189 career dedication of scientists,  125 competition for,  61, 74, 111 credit, for work,  60 decoupling discovery from,  192 Harvard University,  8 for Heisenberg,  71 Human Genome Project,  170 impact and, see Impact, of research Kroto and the EPSRC,  155–156 management of,  180–181 Medical Research Laboratory philosophy,  119 from NASA,  101, 103, 116, 117, 128 overview of issue,  3, 110–111 as percentage of GDP, by country,  7 Planck and,  48 politicians and,  25 private sources,  15, 118

responsibility for, 13 risk management and, see Risk Rutherford and, 56–57 socioeconomic potential and, 88, 174, 175, 176, 177–178, 179, 188 solutions to, 189 taxpayer support of, 3, 9, 11, 17, 25, 179, 187, 188 technology-driven initiatives, 133 in United Kingdom, 188 in United States, 187–188 Venture Research project, 141, 183–184 Woese and, 124 See also Allocation of resources Galileo, 139 Garnet, Thomas, 19 Gates, Bill and Melinda, 15, 191 Gates, Frederick Taylor, 82, 83 GE, 154 Geiger, Hans, 58, 60 Geim, Andre, 176–177 Gene expression, 91 General Theory of Relativity. See Einstein, Albert; Theory of Relativity Genome regulation, 158–173 DNA structure, 166, 167 dogma of, 160–163, 161 gene activity proposal, Mattick, 169 Human Genome Project, 170–171 Lane, Nick on, 185–187 Mattick, on Sharps and Roberts discovery, 165–166 nuclear vs. mitochondrial, 185–187 prokaryotic size constraints, 186 religion vs. science, 163–164 George III, 18 Germany founding of, 39 growth in GDP, 181 Human Genome Project, 170 McClintock and, 91 Nobel Prizes awarded to, 188 refugees from, 90, 118, 142 Gilbert, Walter, 162, 166, 167–168, 172 Gladstone, William, 23–24

Glover, Anne, 183 Glynn Research Institute, 133, 134, 137 Golden Age of Physics, 50–78 Bohr, Niels, 60–65, 65 discoveries made by youth, 76 Heisenberg, Werner, 70–75 Pauli, Wolfgang Ernst, 65–70 Rutherford, Ernest, 56–60 Solvay Conference 1927, 75 Thomson, Joseph John, 53–56, 56 Gordon, Jim, 106 Goss, Simon, 183 Goudsmith, Samuel, 68, 76 Governance. See Policies Grant, Malcolm, 185 Grants. See Funding; Proposals Graphene, 177, 182 Graur, Dan, 173 Gravitational waves, 67, 126 Great Britain. See United Kingdom Green chemistry, 151 Griffith, Fred, 85, 86, 87 Gross, David, 77 Gross domestic expenditures on R&D, 7 Gross domestic product (GDP) per capita, 30, 181 Ground state, electrons, 63, 105 Growth. See Economic growth; Scientific discovery, research, and advancement Gulliver’s Travels (Swift), 93 Hahn, Otto Meitner and, 89, 90 Nobel Prize, 89, 90 research and discoveries of, 57, 70 Rutherford and, 57 Hayek, F.A., 4, 32 Heath, Jim, 153, 155 Heisenberg, Werner, 70–75 Bohr and, 74 Nobel Prize, 74 photograph of, 73, 75 research and discoveries of, 51, 70–74, 76 thesis, 70 Uncertainty Principle, 72, 106, 107 Helmholtz, Hermann von, 39–40, 49

Hemoglobin,  118 Heredity,  79–81 Herschbach, Dudley,  ix–xi Hershey, Alfred,  111–112 Hertz, Heinrich,  54 Heslop-Harrison, Pat,  183 High Quality Research Act,  188 Histones,  166 HMS Beagle,  80, 159 Hoagland, C.N.,  81 Hobbes, Thomas,  29 Holbrey, John,  182 Homer,  29 Homo sapiens C-value paradox,  170 energy generation of,  136 generations of,  80 genes and microbes of,  124, 170 niches of,  28, 123 Hopkins, Johns,  82 Horsthemke, Werner,  183 Horvitz, H. Robert,  192 Hughes, Howard,  3 Hulst, Hendrik C. van de,  69 Human Genome Project,  170–171, 172 Human Microbiome Project,  124 Hunt, Tim,  11 Huppert, Herbert,  183 IBM,  154 Ig Nobel Prize,  176–177 Ignorance blackbody radiation,  44 caloric theory,  41 completeness of,  127, 191 Feynman on,  25 inheritance,  80 role of, in scientific progress,  5, 6, 8, 49 Iijima, Sumio,  153 Impact, of research Brenner and,  119 EPSRC and,  189 focus on, vi,  10–15, 179 funding, see Funding Holbrey and,  182 Jeffreys and,  176 journal status, for publication,  97 Kroto and,  147

Mitchell and,  136 open letter to, Times Higher Education,  194–195 predictable future notion,  176 Seddon and,  182 Sulston on,  192 See also Allocation of resources; Efficiency and productivity; Proposals Incoherent vs. coherent molecules,  105, 107 The Independent,  177 Industrial Revolution,  29–30, 31, 41, 127 Industry chemical defense program,  130 genetic research,  98, see also McClintock, Barbara goals of,  36 pharmaceuticals and Kendrew-Perutz research,  178 policies and,  154, see also Policies research trends in,  3 responsibilities of,  190 as source of scientific discovery,  3 Infectious disease progress,  84, See also Avery, Oswald T. Inquiry into the Nature and Causes of the Wealth of Nations (Smith),  35 International Sequencing Consortium,  171 Introns and exons,  162, 163, 164, 165, 166–168 Investors, private. See Funding Ionic Liquids,  182 Irreversible processes,  42–43 Isotopes,  62 Israel funding, as percentage of GDP,  7, 188 papers published,  8 Jansky, Karl,  101, 102 Jefferts, K.B.,  146 Jeffreys, Alex,  176 Jenkins, Simon,  29 Johnson, Samuel,  179 Jolly, Phillip von,  39

Jordan, Pascual,  72 Jumping genes. See Transposons “Junk” DNA,  168–169 Kaldor, Andy,  150, 155 Keen, Nigel,  183 Keilin, David,  131 Kekulé, August,  144–145 Kelly, Marvin,  102 Kelvin, Lord. See Thomson, William Kendrew, John,  112–113, 118, 178 Kepler,  139 Keynes, John Maynard,  4, 139–140 Kilgore, Harley,  5 Kimble, Jeff,  183 King Shahriyar (character),  185 Kirchhoff, Gustav,  39–40, 43, 49 Knudsen, Martin,  62, 75 Koch, Robert,  82 Köhler, Georges,  11 Kohn, David,  160 Kroto, Harry,  vi, 139–157 benzene and,  144–145 benzene structure,  144 Buckminsterfullerene,  152 carbon research and,  141–142 on chemistry infrastructure in UK,  156–157 education of,  142–144 laser spectroscopy,  149–150 Nobel Prize,  152, 156 photograph of,  153 relationship with author,  150 research and discoveries of,  145–152 research that paved the way for,  139–142 Royal Institution of Great Britain,  21 technology resulting from discoveries,  152–153 Venture Research project,  151 Kuhn, Werner,  68, 69 Kusch, Polykarp,  100, 105 Kuznets, Simon,  35 Lake, J.A.,  122 Laminar flow, liquids,  70 Lane, Nick,  185–187, 186

Laser astronomy and, 146 laser spectroscopy, 149–150 research leading up to, 100, 107–108 Lauterbur, Paul, 104 Levy, Donald H., 149 “Licensing” of scientists, proposal, 189 Light, wave particle dualism, 71 Linnaeus, Carl, 81 Lobar pneumonia, 84, 85, See also Avery, Oswald T. Lohmann, Karl, 129 Loschmidt, Josef, 144 Lovell, Bernard, 103 Lovell Telescope, 103 Lovelock, James, 128, 129 Luria, Salvador, 95, 111, 120 Lycopodium powder, 143 Macdonald, Augustine, 59 Macdonald, William, 57, 59 Mach, Ernst, 45, 65, 67 Mach’s Principle, 66 MacLeod, Colin, 86 Maddison, Angus, 29, 32–34, 36, 181 Maiman, Theodore, 107 Maize cytogenetics, 90, 94, 95–96 Mansfield, Peter, 104 Manufacturing the Future initiative (EPSRC), 190 Marconi, Guglielmo, 57 Margulis, Lynn, 128 Marsden, Ernest, 58, 60 Martin, Bill, 186 Maser, 100, 106, 107, 146 Mattick, John, 158–173 career of, 164 Chen Award, 172 controversy, in science, 158–160 DNA structure, 166, 167 dogma, in genome regulation, 160–163, 161 gene activity proposal, 169, 171–172 Human Genome Project, 170–171 photograph of, 172

religion vs. science,  163–164 on Sharps and Roberts discovery,  165–166 Mavericks. See Controversy and opposition; Unknown, exploration of Maxwell, James Clerk Planck and,  39, 44 research that paved the way for,  24 Thomson and,  53, 54, 55 Townes and,  101 Maxwell-Boltzmann energy distribution law,  105, 106 Mayr, Ernst,  120–121, 122 McCarty, Maclyn,  86 McClintock, Barbara,  89–98 chromosome,  92 Fedoroff on,  91–92, 94 on gene philosophy,  164–165 Nobel Prize,  91, 98 opposition faced,  92, 97, 160 personality,  91–92, 93 photograph of,  96 research and discoveries of,  90–92, 95–97, 99, 163, 169 research that paved the way for,  89–90 thesis,  90 McGill, James,  59 Medawar, Peter,  111–112 Medical Research Council (MRC) Laboratory,  11, 118–120, 128, 176 Meitner, Lise,  70, 89, 94 Mellanby, Edward,  118 Mendel, Gregor,  79–80, 96 Merck, Marie,  40 Merit review. See Peer preview Merrison, Alex,  135 Mesoscopic physics,  176–177 Messenger RNA (mRNA),  117 Methanogens,  117, 120 MicroRNAs,  168 Microwave technology,  100, 104, 145–146 Military technology. See Standard of living; Townes, Charles Millikan, Robert,  101 Milstein, César,  11

Mitchell, Christopher,  133 Mitchell, Peter,  126–138 approach to research,  132 ATP molecule,  132 challenges,  141 early life of,  129 Nobel Prize,  133, 135 opposition faced,  132–135 photograph of,  131, 134, 137 research and discoveries of,  130–135, 163 research that paved the way for,  126–129 technology resulting from discoveries,  136 thesis,  130 Mittag-Leffler, Gustav,  89 Mojave Aerospace Ventures team,  184 Molecules, coherent vs. incoherent,  105, 107 Mond, Ludwig,  20 Monod, Jacques,  80, 162 Moore, Gordon, E.,  8 Moore’s Law,  8 Moran, Laurence,  173 Morell, Virginia,  120–121 Morgan, Lewis,  28 Morris, Simon Conway,  164 Moseley, Harry,  64–65, 71 Moyle, Jennifer,  130, 131, 133, 134 Müller, Hermann,  39 Mutation,  81, 96, 97 Myoglobin,  118 Nanotubes,  153 National Aeronautics and Space Administration (NASA),  101, 103, 116, 117, 128 National Enterprise Board. See British Technology Group (BTG) National Graphene Institute,  177 National Human Genome Research Institute (NHGRI),  170, 171 National Institutes of Health (NIH),  74, 124, 163, 170 National Research Council,  145, 147, 178 National Research Development Corporation (NRDC),  11

National Science Foundation (NSF), x, 5, 187–188 Nature, 152, 160, 163, 170, 186 Neutrinos, 70, 126 Neutrons, 70 Newton, Hannah, 16 Newton, Isaac, 16, 53, 55, 66, 139–140 Newton’s laws of motion, 42 Niches, 123 Nicholson, John, 62–63 Nitrogenous bases, 167 Nobel Committee and Prize Avery and, 87–88, 160 Basov, Prokhorov, Townes (1964), 100 Bloch and Purcell (1952), 104 Bohr (1922), 64 Bragg (1915), 113 Brenner, Horvitz, Sulston (2002), 192 Chadwick (1935), 70 Chain, Fleming, Florey (1945), 13 Crick, Watson, Wilkins (1962), 113 Curl, Kroto, Smalley (1996), 152 Dale, Henry (1936), 5 de Broglie (1929), 72 Einstein (1921), 71 Fellows program, IBM, 154 Geim and Novoselov (2010), 177 Hahn (1944), 89, 90 Hayek (1974), 4 Heisenberg (1932), 74 Kendrew and Perutz (1962), 113 Köhler, Georges (1984), 11 Kuznets (1971), 35 Lauterbur and Mansfield (2003), 104 McClintock (1983), 91, 98 Medical Research Laboratory, 119 Millikan (1923), 101 Milstein, César (1984), 11 Mitchell (1978), 133, 135 Monod (1965), 80 Pauli (1945), 68 Pauling (1954), 74 Pauling (1962), 101 percentage of prizes awarded by country, 188 Planck (1918), 45 Planck Club and, 174 Purcell, Ed, 104

Nobel Committee and Prize (continued) Rabi (1944), 104 Raman (1930), 146 Rayleigh and Ramsey (1904), 20 Reines and Cowan (1995), 70 Roberts and Sharp (1993), 162 Röntgen (1901), 57 Rutherford (1908), 57 Sanger (1980), 117 Schawlow (1981), 100 Schrödinger and Dirac (1933), 74 Solow (1987), 32 Thomson (1906), 55 Venture Research project and, 184 women and, 89 Nonconformists. See Controversy and opposition Novartis Medal, 134 Novoselov, Kostantin, 176–177 Nuclear fission, 90 O’Brien, Sean, 152, 153, 155 Oersted, Hans Christian, 23 Ohno, Susumu, 169 Olbers, Heinrich, 6 On the Origin of Species (Darwin), 80 Oort, Jan, 69 Oppenheimer, J. Robert, 101 Organization for Economic Co-operation and Development (OECD), 33 Orgel, Leslie, 133 Origin of Species (Darwin), 160 Osler, William, 82, 83 Ostwald, Wilhelm, 45, 46 “Ox phos wars,” 133, 134, 135, 136 Oxidative phosphorylation, 130–131, 133 Oxygen, 127 Paradigm shifts. See Revolutionary ideas Parkhouse, Graham, 183 Pascheles, Wolfgang Joseph. See Pauli, Wolfgang Joseph Pasteur, Louis, 82 Patenting, 11, 19, 52, 136, 171 Pathways to Impact initiative, 10 Paton, Alan, 183 Paul, Wolfgang, 106

Pauli, Wolfgang Ernst,  65–70 Bohr and,  68 early life of,  65–66 Nobel Prize,  68 photograph of,  69, 75 research and discoveries of,  51, 76 Pauli, Wolfgang Joseph,  65 Pauling, Linus Bragg, feud with,  114 Nobel Prize,  74 Nobel Prizes,  101, 114 photograph of,  69 research and discoveries of,  74–75, 76 Peer preview (merit review) Avery, on repeating Griffith’s results,  86 COWDUNG (conventional wisdom of the dominant group),  141 defined,  3 Einstein’s Theory of Relativity publication,  48 focus on,  176 vs. freedom and creativity,  114, 174, 178 Kroto and the EPSRC,  155–156 Mitchell and,  133, 137–138 as obstacle to research,  2 Planck and,  45, 47, 49, 111 Townes and,  100, 102–103 in United States,  187–188 value, of process,  3 Woese and,  124 Peer review, by Research Assessment Exercises (RAE),  10. See also Peer preview Penicillin, discovery of,  12–13 Penzias, A.A.,  146 Perutz, Max,  112–113, 118–120, 178 “Phages,”  95, 111–112 “Philosopher’s stone,”  140 Philosophiæ Naturalis Principia Mathematica (Newton),  140 Photoelectric effect,  71 Photosynthetic phosphorylation,  130–131 Physical Review Letters,  107

Physics. See Golden Age of Physics; scientists by name Pioneering Research: A Risk Worth Taking (Braben),  184 Pioneers, in research. See Scientific discovery, research, and advancement; Unknown, exploration of Pitzer, Kenneth,  148 Planck, Gottleib,  39 Planck, John Julius Wilhelm von,  39 Planck, Max,  38–49 on atoms,  50, 145 blackbody radiation,  43, 44 Boltzmann,  46, 47 Carnot, influence of,  41–43 constant, elementary quantum of action,  47 education of,  39–40 Einstein, publication of,  47 Einstein on,  38–39, 48 vs. modern research policies,  45, 48, 49 naming of Planck Club,  1 Nobel Prize,  45 opposition faced,  45, 47, 49, 111, 158 photograph of,  40, 75 quantum hypothesis,  71, 99 research and discoveries of,  8, 163 research that paved the way for,  24 reversibility concept,  42–43 thesis,  40, 41 Where Is Science Going? preface of,  38–39 Planck Club discoveries and,  191 number of members,  174 origin of,  1 See also scientists by name Planck Test,  178 Planck’s constant,  63, 69, 73, 106 Poliakoff, Martyn,  183 Policies vs. creativity and freedom in research,  6–7, 9, 36–37 Boltzmann and,  47 Planck on,  44–45 Townes and,  104 defenders of,  175

efficiency and risk, 33–34, 77 of Engineering and Physical Sciences Research Council, 189–190 exploration of unknown, 159, 175 inefficiency in, 147 Medical Research Laboratory philosophy, 119–120 meeting in person vs. via correspondence, 136 from Navy, in maser-laser research, 108–109 peer review, see Peer preview proposals, as obstacle to research, 3 research quality and, 175 research selection, effect on, 2 return on investment, 6 risk management and, 175 vs. scientists’ duty to profession, 111 Smalley on, 153–154 social pressures, 54, 111 trust and, 93, 94, 98 in UK, affecting research and development, 9–15 See also Freedom and creativity Population growth rates, 28, 51 Porter, George, 20 Prebble, John, 130 Prediction of outcomes and financial impact Geim and Novoselov on, 177–178 Jeffreys and DNA fingerprinting, 176 as poor investment, 174, 175 risk and, 176, 179 in United States, 187–188 See also Coincidence and accidents, role in scientific discovery; Funding; Unknown, exploration of Pressure, on scientists competition. see Competition efficiency and risk, see Efficiency and productivity funding, see Funding grant proposals, see Funding impact, of research, see Impact, of research Price, David, 185 Private investors. See Funding

Productivity and efficiency. See Efficiency and productivity Progress. See Scientific discovery, research, and advancement Prokhorov, Alexander, 100, 108 Proposals (for grants and funding) blocking of, 189, 190 Engineering and Physical Sciences Research Council, 189–190, 190 impact and, see Impact, of research Jeffreys and, 176 Kroto and the EPSRC, 155–156 as obstacle to research, 3, 49 Pathways to Impact initiative, 10 peer review aspect of, see Peer preview profitable returns, 154–155 success rates, 190 See also Prediction of outcomes and financial impact Prosperity. See Economic growth; Scientific discovery, research, and advancement; Standard of living Proteins dogma of, 160–163, 164 exons and, 168 function of, 161 gene activity proposal, 169 introns and, 166 synthesis of, 117 three-dimensional structures of, 113, 114 viruses and, 112 Public funding, 3, 9, 11, 17, 25, 179, 187, 188 Publication, pressure of, 97–98 Purcell, Ed, 69, 104 Quality of life. See Standard of living “Quality” of research, 175 Quantum theory and mechanics, 48, 63, 64, 66, 72, 74, 77, 105 Rabi, Isidor, 100, 104, 105 Racker, Efraim, 132–133, 134, 135 Radar research, Townes, 102 Radiation, α-rays and β-rays, 57 Radiation, blackbody, 43, 44 Radiation, cathode, 54–55

Radio astronomy,  69, 102, 146, 147 Radioactive decay,  70 Radioactivity,  57 Raman, C.V.,  145–146 Ramsey, Don,  145 Ramsey, William,  20 Randolph, Lowell,  90, 91 “Raspberry Pi,”  182 Rayleigh, Lord. See Strutt, John William Realising Our Potential (White Paper,  1993),  9 Red giant stars,  150, 151 Rees, Martin,  183 Reines, Frederick,  70 Religion, effect on scientific discovery,  24–25, 163–164 Research. See Scientific discovery, research, and advancement Research, foreseeable benefits of. See Prediction of outcomes and financial impact Research and Development (R&D). See Economic growth; Scientific discovery, research, and advancement Research Assessment Exercises (RAE),  10, 12 Research Councils UK (RCUK) impact agenda meeting,  11 impact methodologies,  13–15 open letter to, Times Higher Education,  10–11, 194–196 Royal Charters,  9–10 Research Excellence Framework (REF),  12–13 Research grants. See Funding; Proposals Resource allocation. See Allocation of resources Reversible processes,  42–43 Revolutionary ideas Avery and DNA,  82, 86, 87, 160 current policies,  159 Darwin and,  159–160 Darwin and evolution,  159–160 Mattick and RNA proposal,  171, 172, 173 McClintock and transposons,  92, 97, 160

Mitchell and chemiosmotic theory,  132–135 “ox phos wars,”  133–136 vs. peer preview policies,  174 of religion, on science,  163–164 response to new discoveries,  165–166, 173 Townes and Apollo project,  103 vs. vested interests,  158 Woese and archaea domain,  120–121, 160 See also Coincidence and accidents, role in scientific discovery Rhoades, Marcus,  93, 164 Rhoads, Sara Jane,  148 Ribonucleic acid. See RNA (ribonucleic acid) Ribosomal RNA (rRNA),  117, 121 Rich, Peter,  136, 137 Ring of life diagram,  122 Risk attitudes towards,  175 of criticism by peers,  174, see also Controversy and opposition; Peer preview education and,  143–144 exploring the unknown in research,  175, see also Unknown, exploration of governance of,  33 investment and,  8 as policy component,  33 to scientists’ careers,  97 Rivera, M.C.,  122 RNA (ribonucleic acid) ATP and,  129 dogma of,  160–163, 164 gene activity proposal,  169 Mattick’s proposal of,  171–172 microRNAs,  168 phylogenetic tree with Archaea,  121 sequencing,  117, 120 structure of,  167 The Road to Serfdom (Hayek),  4 Roberts, Richard,  162, 164, 165 Rockefeller, John D.,  3, 15, 71, 82, 118, 119 Rogers, Will,  26

Röntgen, Wilhelm, 57 Roosevelt, Franklin Delano, 5, 93 Roux, Jean-Claude, 183 Royal Institute, London, 17 Royal Institution of Great Britain, 18–21 Rumford, Count. See Thompson, Benjamin Russell Group, on measurement of economic impact of research, 11 Rutan, Burt, 184 Rutherford, Ernest, 56–60 Bohr and, 60, 62, 64 early life of, 56 education of, 57 funding for, 59 humility and, 60 Nobel Prize, 57–58 research and discoveries of, 51, 60, 76 Sandwalk (blog), 173 Sanger, Fred, 117, 130 Schawlow, Arthur, 100, 106, 107 Scheherazade (character), 184 Schrödinger, Erwin, 71, 72, 74, 75, 76 Schrödinger equation, 72 Schumpeter, Joseph, 4 Science, 120–121, 186, 192–193 Science and Engineering Research Council (SERC), 150 Science: The Endless Frontier (Bush), 5 Scientific Autobiography and Other Papers (Planck), 48 Scientific discovery, research, and advancement, 174–193 on carbon, see Carbon; Kroto, Harry coincidence and accidents, role in, 16–26 completeness concept, 124–125 drawbacks to, 179 education and, 32, 34, 143, 144, 181 effects of policy on, 2 financial crisis and, 175 funding, as percentage of GDP, 7 future expectations of, 4 growth in GDP, 181 ignorance, role of, 5, 6, 8, 49, 191 impact of, see Impact, of research needs to advance, 191

Scientific discovery, research, and advancement (continued) Nobel Prizes awarded by country, 188 proposal success rates, EPSRC, 190 publication and pressure, 97–98 religion, restrictions of, 24–25, 163–164 research quality and, 175 response to, 165–166 revolutionary ideas. see Revolutionary ideas rules governing, 190 society’s role in, 179 unknown, dependence on, see Prediction of outcomes and financial impact; Unknown, exploration of by Venture Research project, 183 See also specific discoveries; Economic growth; Golden Age of Physics; Standard of living Scientific Freedom: The Elixir of Civilization (Braben), 15, 35, 100, 182, 183, 191 Screening, of research proposals. See Funding; Peer preview; Proposals Second Law of Thermodynamics, 41 Seddon, Ken, 150, 151, 182, 183 Self, Colin, 183 Shakespeare, William, 53 Sharp, Philip, 162, 164, 165 Silverstein, Barry, 184 Sklodowska, Marie. See Curie, Marie Slater, E.C., 130, 131, 132, 134 Smalley, Richard, 148, 149, 150, 153–154, 153 Smith, Adam, 35 Smith, Lamar, 188 Smolin, Lee, 77 Social pressures, effect on research. See Competition; Controversy and opposition; Pressure on scientists Soddy, Frederick, 57, 62, 76 Solow, Robert, 15, 32–33, 35, 36 Sommerfeld, Arnold, 66, 68, 70 Southey, Robert, 19 Sparks, Steve, 183

Spectroscopy, 145, 146, 149–150. See also Kroto, Harry Spin, electrons, 66, 68–69 Splicing, of genetic material, 162, 169 Sputnik 1 satellite, 103, 148 Stadler, Lewis, 92 Standard of living Hobbes on, 29 intellectual property rights, 30 maser-laser technology and, 108 medical technology, 104 military advances, 102, 104–105, 106, 108–109 population growth rates, 28 quantum mechanics effect on, 75 research quality and, 175 technical change and, 32–33, 36 Stanier, Roger, 116 Stanley, Gene, 183 Steacie, E.W.R., 145 Steam engines, 41 Steel, Iain, 183 Stefan, Joseph, 145 Stent, Gunther, 112 Stephenson, George, 41 Stokes, Henry, 16 String theory, 77 Strutt, John William “Lord Rayleigh,” 20, 53, 61 Sulston, John, 192 Sun, as source of energy for Earth, 127 Survival. See Standard of living Swann, Michael, 130 Sweden, funding, as percentage of GDP, 188 Swift, Jonathan, 92–93 Swinney, Harry, vi, 138, 183 Taxpayers, as funding source, 3, 9, 11, 17, 25, 179, 187, 188 Teixeira, José, 183 The Telegraph, 156–157 Theodor, Karl, 18 The Theory of Moral Sentiments (Adam Smith), 35 Theory of Relativity Boltzmann, 46 de Broglie and, 71, 72

discovery of,  76 inspiration for,  66 publication of,  47–48 Thermodynamics,  40–41, 44, 132, See also Planck, Max Thompson, Benjamin (Count Rumford) caloric theory and,  42 career and inventions of,  17–18 early life of,  17 motivation of,  22 move to Paris,  19–20 Royal Institution of Great Britain,  18–21 Thomson, Joseph John,  53–56 Bohr and,  61 Nobel Prize,  55 photograph of,  56 research and discoveries of,  51, 76 Rutherford and,  57, 60 Thomson, William,  42 Thorpe, Alan,  11 Times Higher Education open letter to, Appendix 1, 10–11 world’s top chemists,  182 Tofts, Chris,  183 Townes, Charles,  99–109 career,  101–102, 103 early life of,  100–101 Nobel Prize,  100 opposition faced,  103 research and discoveries of,  100, 104–107, 163 research that paved the way for,  99 technology resulting from discoveries,  108–109, 146 thesis,  101 Transformative Research initiative,  187 Transposable elements. See Transposons Transposons,  91, 97, 160, 163, 167, 169 Trust, in research,  181, 192 Turner, Richard,  148 Twain, Mark,  26 Tyndall, John,  23 Uhlenbeck, George,  68, 76 Uncertainty Principle,  72, 106, 107

United Kingdom chemistry department closures, 156–157 economic growth, and technology, 34 Engineering and Physical Sciences Research Council, 14, 155–156 funding, as percentage of GDP, 7 funding in, 188 growth in GDP, 181 Industrial Revolution, 29–30, 31, 41 intellectual property rights, 30 Medical Research Laboratory, 118–120 Nobel Prizes awarded to, 188 Nobel Prizes, percentage of, 14 research and development issues in, 9–15 Royal Institution of Great Britain, 18–21 Science and Engineering Research Council (SERC), 150 Science Museum, 182 standard of living, 33 Thompson, Benjamin, 17–19 world’s top chemists, 182 United States economic growth, 31–32, 34 growth in GDP, 181 Nobel Prizes awarded to, 14, 188 research freedom and, 187–188 Thompson, Benjamin, 17–18 top chemists, 182 Unknown, exploration of Avery and DNA, 82, 86, 87, 160 current policies, 159 Darwin and evolution, 159–160 Fleming and, 13 Mattick and RNA proposal, 171, 172, 173 McClintock and transposons, 92, 97, 160 Mitchell and chemiosmotic theory, 132–135 “ox phos wars,” 133–136 vs. peer preview policies, 174 vs. policies, 159, 175 of religion, on science, 163–164

Unknown, exploration of (continued) response to new discoveries, 165–166, 173 risk and, see Risk Townes and Apollo project, 103 Woese and archaea domain, 120–121, 160 See also Coincidence and accidents, role in scientific discovery; Revolutionary ideas van der Vink, Gregory, 192–193 van Niel, C.B., 116, 120 van Wyhe, John, 159 “Vectorial chemistry,” 131 Venture Research project BP funding and, 154, 182 discoveries made by, 183 founding of, 84 freedom of research in, 141, 181 investment in, 183–184 Mitchell and, 137 need for, 191 Nobel Prize criteria, 184 proposals in, 182 Seddon and, 150–151 successes in, 182 Swinney-DeKepper collaboration, 138 University College London initiative, 185–187, 191 Venture Research International Ltd (VRi), 183 Visionary ideas. See Scientific discovery, research, and advancement; Unknown, exploration of von Hevesy, Georg, 62 von Stradonitz. See Kekulé, August Waddington, Conrad Hal, 141 Wallace, Alfred, 159 Walton, David, 147

Walton, Ernest,  60 Watson, James,  111, 112–114 Watt, James,  19, 20 Weber, Bruce,  130 Weber, Heinrich,  52 Weiner, Alan,  120 Wellcome Trust,  170 Westheimer, Frank,  166 Where Is Science Going? (Planck),  38–39 Wilhelm II (Kaiser),  145 Wilkins, Maurice,  112 Willetts, David,  11 William, of Occam,  29 Wilson, R.W.,  146 Woese, Carl,  110–125 DNA double helix,  113 early life of,  114–115 niches, of organisms,  123 opposition faced,  120–121, 160 phylogenetic tree with Archaea,  121 research and discoveries of,  115–123 research approach,  131 research that paved the way for,  110–114 ring of life,  122 UK Medical Research Laboratory,  118–120 Wolfe, Ralph,  117, 120 Woods, Robin Arthur,  88 Wordsworth, William,  19 The World (ship),  184 World War II, scientific progress and,  4–5 X-Prize,  184 Z electrons,  60 Zea mays,  90, 94, 95–96 Zeeman effect,  68 Zeigler, Herb,  106 Zuckerman,  146