The Practice of Data Analysis: Essays in Honor of John W. Tukey [Course Book ed.] 9781400851607


English Pages 352 [351] Year 2014


THE PRACTICE OF DATA ANALYSIS

John W. Tukey at Bell Labs, 1985.

THE PRACTICE OF DATA ANALYSIS
ESSAYS IN HONOR OF JOHN W. TUKEY

Edited by
D. R. Brillinger, L. T. Fernholz, and S. Morgenthaler

PRINCETON UNIVERSITY PRESS
PRINCETON, NEW JERSEY

Copyright © 1997 by Princeton University Press
Published by Princeton University Press, 41 William Street, Princeton, New Jersey 08540
In the United Kingdom: Princeton University Press, Chichester, West Sussex
All Rights Reserved

LIBRARY OF CONGRESS CATALOGING-IN-PUBLICATION DATA

The practice of data analysis : essays in honor of John W. Tukey / edited by D. R. Brillinger, L. T. Fernholz, S. Morgenthaler.
p. cm.
Includes bibliographical references.
ISBN 0-691-05782-6 (alk. paper)
1. Mathematical statistics. 2. Tukey, John Wilder, 1915- . I. Tukey, John Wilder, 1915- . II. Brillinger, David R. III. Fernholz, Luisa Turrin. IV. Morgenthaler, Stephan.
QA276.16.P73 1998
519.5—DC21 97-19695

The publisher would like to acknowledge the authors of this volume for providing the camera-ready copy from which this book was printed.

Princeton University Press books are printed on acid-free paper and meet the guidelines for permanence and durability of the Committee on Production Guidelines for Book Longevity of the Council on Library Resources

http://pup.princeton.edu

Printed in the United States of America

1 3 5 7 9 10 8 6 4 2

CONTENTS

Preface ... vii

Opening Material
Introductory Remarks by the Editors ... 3
Biographical Information ... 5
Curriculum Vitae of John Wilder Tukey ... 9
Ph.D. Theses Directed by John W. Tukey, Princeton University, 1940-1990 ... 16
Partial List of John W. Tukey's Grandstudents ... 19
A Conversation with John W. Tukey ... 26
Elizabeth Tukey's Speech ... 46
Program of the Conference in Honor of John W. Tukey on His 80th Birthday ... 48
List of Participants ... 49

Scientific Papers
Errors-in-Variables Regression Estimators That Have High Breakdown and High Gaussian Efficiency (Dhammika Amaratunga) ... 57
The Analytic Jackknife (David F. Andrews) ... 67
Assessing Connections in Networks of Biological Neurons (David R. Brillinger and Alessandro E. P. Villa) ... 77
Estimating Abundances for a Breeding Bird Atlas (Christopher A. Field) ... 93
Statistical Methods, Graphical Displays, and Tukey's Ladder of Re-Expression in the Analysis of Nonindependence in Contingency Tables: Correspondence Analysis, Association Analysis, and the Midway View of Nonindependence (Leo A. Goodman) ... 101
Some Additional Notes on the "Princeton Robustness Year" (Frank Hampel) ... 133
Tracking Chess Players' Abilities (John A. Hartigan) ... 155
Speculations on the Path of Statistics (Peter J. Huber) ... 175
A Regression Analysis with Categorical Covariables, Two-way Heteroscedasticity, and Hidden Outliers (Mia Hubert and Peter J. Rousseeuw) ... 193
Mean Square over Degrees of Freedom: New Perspectives on a Model Selection Treasure (Clifford M. Hurvich) ... 203
Geographical Trends in Cancer Mortality: Spatial Smoothers and Adjustment (Karen Kafadar) ... 217
Covering Designs in Random Environments (Colin L. Mallows) ... 235
Gaussianizing Transformations and Estimation (Stephan Morgenthaler) ... 247
The Tennessee Study of Class Size in the Early School Grades (Frederick Mosteller) ... 261
On the Distribution of Order Statistics from a p-wild Distribution (Ha H. Nguyen) ... 279
Resistant Modeling of Income Distributions and Inequality Measures (Elvezio Ronchetti and Maria-Pia Victoria-Feser) ... 287
Bonus Decompositions for Robust Analysis of 2^n Factorial Experiments (Allan H. Seheult) ... 299
The Philosophical Past and the Digital Future of Data Analysis: 375 Years of Philosophical Guidance for Software Design on the Occasion of John W. Tukey's 80th Birthday (Paul F. Velleman) ... 317

PREFACE

John W. Tukey celebrated his 80th birthday on June 16, 1995. Such an occasion should, of course, not go unnoticed, and in order to allow as many people as possible to join in the celebration, it was natural to organize a symposium in his honor at Princeton University. At Luisa Fernholz's initiative we contacted people by mail, email, and phone. The response to our invitation was overwhelming. Over a hundred of his friends, students, and co-workers from all over the world wanted to be part of the festivities and come to Princeton on June 19th and 20th, 1995. Thirteen speakers gave talks covering a wide range of topics befitting an event in honor of John Tukey. Throughout the two-day symposium a cordial and warm atmosphere prevailed in which the participants expressed their gratitude and best wishes to John. The dinner in his honor on June 19th turned into a memorable celebration; we thank all the participants for their enthusiastic support of this festive event.

The present festschrift is an outgrowth of this two-day symposium. The first part includes the speech that Elizabeth Tukey prepared for the dinner on June 19th, and a transcript of the conversation with John Tukey on the afternoon of June 20th, which we hope conveys the spirit of the conference. We are also including a short biography and a curriculum vitae. John Tukey has published more than five hundred papers, and we do not reprint a bibliography, since a detailed version appears in his collected works (Vols. I-VII, Wadsworth & Brooks/Cole, Pacific Grove, California; Vol. VIII, Chapman & Hall, London, UK). An aspect of John Tukey's personality that is less widely known is his great generosity with his ideas. He guided many graduate students to successful Ph.D.s and inspired their careers. A list of his students and a partial list of his grand-students are also included.
The second part of the festschrift contains the contributed papers that some speakers, participants, and other researchers have prepared to honor John. We thank them for their contributions. We would also like to thank all the referees involved in this enterprise. They put in a considerable amount of work and were instrumental in ensuring high quality. We would like to express our gratitude to Elizabeth Tukey for helping us capture the right spirit throughout the preparation of the symposium and for providing us with some essential background information toward the preparation of questions for the conversation with John. Elizabeth also provided us with the speech that she prepared for the dinner on June 19th,


and some valuable photographs that we reproduce here. The organization of the symposium would not have been possible without the financial support of the Minerva Research Foundation. We thank Prof. Joseph J. Kohn, chairman of the Mathematics Department of Princeton University, for sponsoring the symposium. We also thank the staff of the Mathematics Department, and especially Eileen Olszewski, for her dedicated effort in handling much of the organizational work. Anne-Lise Choulat, staff member of the Mathematics Department of the Swiss Federal Institute of Technology in Lausanne, did much of the work involved in the typesetting of the papers. We thank her for a job well done.

Princeton, April 15, 1996
David R. Brillinger, Luisa T. Fernholz, and Stephan Morgenthaler

OPENING MATERIAL

INTRODUCTORY REMARKS BY THE EDITORS

The opening material of this volume contains a short biography and a curriculum vitae which exhibit many facets of John W. Tukey's extraordinary life and personality. Statistics as a field has gained enormously in prestige thanks to John Tukey, and one measure of his prolific energies and his profound influence is the long list of Ph.D. students and grand-students which follows the CV. Anyone who has had the privilege to hear lectures by John Tukey, or to discuss statistical issues with him, knows what a deep source of insights, often delivered along with an anecdote, he is. The transcribed conversation printed in the first part of the book will allow the reader to glimpse Tukey's personal modesty as well as his extraordinary intellect and imagination. The opening material is rounded out by the program of the conference organized by the Minerva Foundation in honor of John Tukey's 80th birthday, the list of participants, and the speech prepared by Elizabeth Tukey for the official dinner held at this conference.

The second part of the book contains a collection of papers dedicated to John Tukey by colleagues and students. These papers represent a variety of directions and research interests in statistics. The reader can appreciate that most of the papers presented here have as points of departure the original research of John W. Tukey. No collection of papers honoring John Tukey would be complete without an essay on the past and the future of statistics. Peter Huber speculates on these issues in his contribution, which brings together insights from many eminent statisticians. John Tukey pioneered robustness and has been a driving force in its development. One of the key events in this field was the "Princeton Robustness Year" held during 1970/71, when Peter Bickel, Frank Hampel, and Peter Huber joined Tukey to pursue further the work on robust statistics.
Frank Hampel gives us his insider's view of this historical workshop, clearing up some misunderstandings and offering insights along with several open problems. Six other papers deal with robustness or with resistant methods for data analysis. Dhammika Amaratunga studies robust alternatives to the least squares method in errors-in-variables regression models. A practical example showing how to perform a highly robust analysis of a data set with categorical variables is presented by Mia Hubert and Peter Rousseeuw.


Stephan Morgenthaler examines an estimation procedure based on transforming the data to render it more Gaussian in some sense. Explicit formulas for the distribution of order statistics from p-wild distributions are derived by Ha Nguyen. Elvezio Ronchetti and Maria-Pia Victoria-Feser discuss robustness issues in economic models, in particular distributions of incomes. Allan Seheult writes about robust methods for the analysis of designed experiments, namely "bonus decompositions" of 2^n factorials.

Two areas that have attracted John Tukey's attention throughout his career are re-expressions and resampling. Leo Goodman uses Tukey's ladder of re-expressions in the study of non-independence in contingency tables. The paper on the analytic jackknife by David Andrews applies modern computer algebra to shed light on the properties of the jackknife. Cliff Hurvich investigates some of the properties of a model selection criterion, mean square over degrees of freedom, proposed by JWT in his discussion of a 1967 paper by Frank Anscombe on linear modelling.

A principal basis of Tukey's statistical thought is the science of statistics as it interacts not only with the other sciences but with every aspect of our lives. Several articles in this collection pay tribute to this basis, covering applications and case studies of interest not only to statisticians but to more general readers. The paper by Colin Mallows presents the problem of software testing in terms of classical design of experiments. Christopher Field gives a detailed and original case study on estimating abundances for a breeding bird atlas. John Hartigan considers the ELO rating system for tracking chess players' abilities and suggests some ways to improve it and to use it in other kinds of paired comparisons. Karen Kafadar presents some methodology based on smoothing techniques to analyze cancer data.
Frederick Mosteller links statistics and public policy regarding education in his paper on the Tennessee study of class size in elementary schools. David Brillinger and Alessandro Villa concern themselves with inferring connections among regions of the brain through analyses of spike trains from neurons in those regions. Paul Velleman's philosophical discussion of the past and the digital future of Data Analysis presents some of Sir Francis Bacon's ideas on how scientific inquiry should proceed and compares them with Tukey's philosophy of Data Analysis.

We hope that this collection will provide readers, especially young statisticians, with a useful and interesting bouquet of statistical problems as well as directions for further research. That would be our best tribute to John Tukey.

BIOGRAPHICAL INFORMATION

John Wilder Tukey was born in New Bedford, Massachusetts, on June 16, 1915. John is the only child of Adah M. Tasker and Dr. Ralph H. Tukey, who met at Bates College (Lewiston, Maine) as members of the class of 1898. He earned bachelor's and master's degrees in chemistry at Brown University in 1936 and 1937, respectively. He then went to Princeton, where, in 1938, he was a Jacobus Fellow, and upon receiving his doctorate in mathematics in 1939 he was appointed Henry B. Fine Instructor in Mathematics. A decade later, at age 35, he was advanced to a full professorship. He directed Princeton's Statistical Techniques Research Group from its founding in 1956. When the Department of Statistics was formed in 1965, Tukey was named its first chairman, and held that post until 1970. He was appointed to the Donner Chair in 1976. He chaired the University (Academic) Schedule Committee from 1945 to 1970. At the same time, he was a Member of Technical Staff at AT&T Bell Laboratories from 1945, advancing to Assistant Director of Research, Communications Principles, in 1958 and, in 1961, to Associate Executive Director-Research, Information Sciences, which position he held until his retirement in 1985.

John W. Tukey has attracted international attention for his studies in mathematical and theoretical statistics and their application to a wide variety of scientific and engineering disciplines. He has led the way in the now-burgeoning fields of Exploratory Data Analysis and Robust Estimation, and his contributions to the Spectrum Analysis of Time Series and other aspects of Digital Signal Processing have been widely used in engineering and science. He has been credited with coining the word "bit," a contraction of "binary digit" which refers to a unit of information, often as processed by a computer.
In addition to strong continuing interests in a wide variety of areas of statistical philosophy, techniques, and application, John Tukey has been active in improving the access of the scientist to the scientific literature, particularly through the development of citation and permutation indices to the literature of statistics and probability. In another area, collaboration with a fellow mathematician resulted in the formulation of the Cooley-Tukey Fast Fourier Transform (FFT) algorithm, a mathematical technique that greatly simplifies computation for Fourier series and integrals. Other interests range through applications to such fields as behavioral sciences, geophysics, and pharmaceutical research.

John Tukey's participation in educational, public, and government service is most impressive. Throughout World War II he participated in projects


assigned to Princeton's Fire Control Research Office, working on antiaircraft, armored vehicle, and aircraft fire control problems. His wartime service with the Princeton Branch of the Frankfort Arsenal Fire Control Design Division marked the beginning of his close and continuing associations with governmental committees and agencies. He helped to supervise work in military systems analysis conducted in 1951-56 at Princeton's James Forrestal Campus under the joint sponsorship of the Department of Defense and the Bell Telephone Laboratories. A member of the U.S. Delegation to Technical Working Group 2 of the Conference on the Discontinuance of Nuclear Weapon Tests in Geneva in 1959, he also served on the U.S. Delegation to the U.N. Conference on the Human Environment in Stockholm in 1972.

Between 1950 and 1954, John Tukey served on a committee of the American Statistical Association to advise the National Research Council's Committee for Research in Problems of Sex. Subsequently, the committee prepared the report Statistical Problems of the Kinsey Report on Sexual Behavior in the Human Male. From 1960 to 1963, he served on the President's Science Advisory Committee and chaired that committee's Environmental Pollution Panel (1964-65) and Chemicals and Health Panel (1971-73), among others. He has been a member of the President's Air Quality Advisory Board, President Johnson's Task Force on Environmental Pollution, President Nixon's Task Force on Air Pollution, and the National Advisory Committee on Oceans and Atmosphere. A former member of the National Science Foundation's Science Information Council (1962-64), Tukey chaired a National Academy of Sciences/National Research Council committee between 1975 and 1979 that investigated the potential danger of increased ultraviolet exposure (on the Earth) resulting from depletion of the atmosphere's protective ozone layer by fluorocarbons, the propellants in many aerosol spray cans.
Three major reports were a result of the committee's work: Halocarbons: Environmental Effects of Chlorofluoromethane Release; Protection Against Depletion of Stratospheric Ozone by Chlorofluorocarbons; and Stratospheric Ozone Depletion by Halocarbons: Chemistry and Transport. The committee's results prompted the Food and Drug Administration to require that many aerosols be labeled hazardous to the environment.

John Tukey served as chairman of the Technical Advisory Committee of the National Assessment of Educational Progress (NAEP) from its inception in 1963 and throughout its operation by the Education Commission of the States (which ended in 1982). In 1989, he was appointed a member of the Design and Analysis Committee (DAC) by the Educational Testing Service, which had taken over operation of NAEP.


He was a member of the Board of Fellows of Brown University (which, with its Board of Trustees, makes up Brown's Corporation) for fourteen years (1974-88), serving also as chairman of the corporation's committee on Computers in Education.

In 1965 John Tukey was named the first recipient of the Samuel S. Wilks Award of the American Statistical Association. He received the National Medal of Science in 1973 "for his studies in mathematical and theoretical statistics . . . and for his outstanding contributions to the applications of statistics to the physical, social, and engineering sciences". Tukey received the Shewhart Medal of the American Society for Quality Control in 1977; the Institute of Electrical and Electronics Engineers' 1982 Medal of Honor for the Cooley-Tukey Fast Fourier Transform (FFT) Algorithm; and the American Society for Quality Control's Deming Medal in 1983. In 1989 he was elected a Foreign Member of The Royal Society (London). Princeton University honored him in 1984 with the James Madison Medal, given annually to an alumnus of the Graduate School who has had a distinguished career, who has advanced the cause of graduate education, or who has achieved a record of outstanding public service. In 1989 he received the Monie A. Ferst Award of Sigma Xi, and in 1990 the Educational Testing Service Award for Distinguished Service to Measurement.

John Tukey taught on both the undergraduate and graduate levels and is widely sought as a seminar leader and lecturer. He holds honorary degrees from Case Institute of Technology, the University of Chicago, and Brown, Temple, and Yale Universities.
Author of Exploratory Data Analysis (now translated into Russian) and eight (to date) volumes of collected papers, he is co-author of: Statistical Problems of the Kinsey Report on Sexual Behavior in the Human Male; Data Analysis and Regression (also translated into Russian); Index to Statistics and Probability: Citation Index; Permuted Titles; Locations and Authors (four volumes); The Measurement of Power Spectra; and Robust Estimates of Location: Survey and Advances. He was co-editor of and contributor to Understanding Robust and Exploratory Data Analysis, Exploring Data Tables, Trends and Shapes, Configural Polysampling, and Fundamentals of Exploratory Analysis of Variance. John Tukey has written more than 500 technical papers and reports, and he has served on editorial committees of several professional organizations. He has been represented frequently in publications such as the Annals of Mathematical Statistics (of which he was associate editor, 1950-52), Biometrics, the Journal of the American Statistical Association (JASA), Science, and Technometrics.


President of the Institute of Mathematical Statistics in 1960, he has also served as Vice President of the American Statistical Association and as Chairman of both its Biometrics Section and its Section on Physical and Engineering Sciences. He has been a Member of the Council of the Biometric Society, and served on the National Research Council for more than a decade. He is a member of the National Academy of Sciences, and has served on its council (1969-72 and 1975-78) and as Chairman of Class III Applied Sciences (1969-72). He is a Member of the American Philosophical Society (Vice President, 1975-77) and the American Academy of Arts and Sciences, Honorary Fellow of the Royal Statistical Society (London), and a Foreign Member of The Royal Society (London). Among his other affiliations are the International Statistical Institute, the American Association for the Advancement of Science (Vice President, Section A, 1972, and Section U, 1974), and Sigma Xi.

John Tukey has been a consultant (among others) to various pharmaceutical companies (over 40 years to Merck and Company) and, since retirement, to the Xerox Corporation (among others). He led the statistical component of NBC's election night vote projection effort in all major elections from 1960 to 1980, after which exit polls took over the role previously played by statisticians.

John Wilder Tukey and Elizabeth Louise Rapp of Pemberton, NJ, were married in July 1950. Before their marriage, Mrs. Tukey was Personnel Director of the Educational Testing Service, Princeton.
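The Cooley-Tukey FFT mentioned in the biography rests on a simple recursion: an N-point discrete Fourier transform splits into two N/2-point transforms of the even- and odd-indexed samples, reducing the work from O(N^2) to O(N log N). The following sketch is ours, not the volume's; the function names and the test signal are invented for illustration, and the recursive radix-2 form shown is only one of the algorithm's many variants:

```python
# Illustrative radix-2 Cooley-Tukey FFT (a sketch, not from this volume),
# checked against the direct O(N^2) evaluation of the DFT.
import cmath

def dft_naive(x):
    """Direct O(N^2) evaluation of the DFT, for comparison."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n))
            for j in range(n)]

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of 2."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # N/2-point transform of even-indexed samples
    odd = fft(x[1::2])    # N/2-point transform of odd-indexed samples
    # Combine the halves using the "twiddle factors" exp(-2*pi*i*k/N).
    twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k]
                for k in range(n // 2)]
    return ([even[k] + twiddled[k] for k in range(n // 2)] +
            [even[k] - twiddled[k] for k in range(n // 2)])

if __name__ == "__main__":
    signal = [1.0, 2.0, 3.0, 4.0, 0.0, -1.0, -2.0, -3.0]
    fast, slow = fft(signal), dft_naive(signal)
    assert all(abs(u - v) < 1e-9 for u, v in zip(fast, slow))
```

The saving comes entirely from reuse: each half-size transform is computed once and combined, rather than re-evaluating the full sum for every output frequency.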

CURRICULUM VITAE OF JOHN WILDER TUKEY

PERSONAL
Born: New Bedford, MA, 16 June 1915
Married: Elizabeth L. Rapp, 19 July 1950

EDUCATION
1936 Sc.B., Chemistry, Brown University
1937 Sc.M., Chemistry, Brown University
1938 M.A., Mathematics, Princeton University
1939 Ph.D., Mathematics, Princeton University
Honorary Sc.D. Degrees
1962 Sc.D., Case Institute of Technology
1965 Sc.D., Brown University
1968 Sc.D., Yale University
1969 Sc.D., University of Chicago
1978 Sc.D., Temple University

PROFESSIONAL POSITIONS
AT&T Bell Laboratories, Murray Hill, NJ
1961-85 Associate Executive Director-Research, Information Sciences
1958-61 Assistant Director of Research, Communications Principles
1945-85 Member, Technical Staff

Princeton University, Princeton, NJ
1985-pres. Senior Research Statistician
1976-pres. Donner Professor of Science, Emeritus
1985-pres. Professor of Statistics, Emeritus
1976-85 Donner Professor of Science
1965-85 Professor of Statistics
1965-70 Chairman, Department of Statistics
1951-56 Supervisor, Military Systems Analysis, James Forrestal Campus
1950-65 Professor of Mathematics
1948-50 Associate Professor of Mathematics
1950-70 Chair, University (Academic) Schedule Committee


PROFESSIONAL POSITIONS (cont'd)
1942-44 Technical Expert, Princeton Branch, Frankfort Arsenal Fire Control Design Division
1941-48 Assistant Professor of Mathematics
1941-44 Research Associate, Fire Control Research Office
1939-41 H.B. Fine Instructor in Mathematics
1938-39 Jacobus Fellow

Other
1985-pres. Consultant, Xerox Corporation, Palo Alto, CA
1979 Visitor, Department of Scientific and Industrial Research, Wellington, New Zealand
1971 & 1979 Visitor, Commonwealth Scientific and Industrial Research Organization, Canberra, Australia
1972 & 1979 Visitor, Stanford Linear Accelerator Center, Stanford, CA
1978 Visitor, National Center for Atmospheric Research, Boulder, CO
1965-pres. Consultant, Educational Testing Service, Princeton, NJ
1960-1980 Consultant, National Broadcasting Company Election News, New York, NY
1952-pres. Consultant, Merck and Company, West Point, PA

EDUCATIONAL SERVICE
Brown University, Providence, RI
1974-88 Board of Fellows (Member of the Corporation)
1974-88 Chairman, Committee on Computers in Education
1974-88 Member, Library Committee
1983-88 Member, Consultation Committee

Massachusetts Institute of Technology, Cambridge, MA
1975-pres. Member, Corporation Visiting Committee, Department of Mathematics

Stevens Institute of Technology, Hoboken, NJ
1983-pres. Member, Visiting Committee, Pure & Applied Mathematics

Carnegie Mellon University, Pittsburgh, PA
1992-pres. Member, Department of Statistics Advisory Board


PUBLIC SERVICE
National Assessment of Educational Progress, Princeton, NJ (Educational Testing Service)
1989-91 Member, Design and Analysis Committee

National Assessment of Educational Progress, Denver, CO (Education Commission of the States)
1973-82 Chairman, Science Panel of Analytical Advisory Committee
1963-73 Chairman, Analytical Advisory Committee

National Academy of Sciences, Washington, D.C.
1961-pres. Member
1969-71 Member of the Council
1969-78 Class III: Chairman, 1969-72 and 1975-78

National Research Council
1975-79 Chairman, Committee on Impacts of Stratospheric Change, which prepared: Halocarbons: Environmental Effects of Chlorofluoromethane Release; Protection Against Depletion of Stratospheric Ozone by Chlorofluorocarbons; and Stratospheric Ozone Depletion by Halocarbons: Chemistry and Transport

Health Effects Institute, Boston, MA
1981-92 Member, Health Research Committee

American Academy of Arts and Sciences, Boston, MA
1981-83 Member, Class I Committee on Membership

Center for Advanced Study in the Behavioral Sciences, Stanford, CA
1980-pres. Member, Advisory Committee on Special Projects

American Statistical Association, Washington, DC
1950-54 Member, Committee to advise the National Research Council Committee for Research in Problems of Sex, which prepared: Statistical Problems of the Kinsey Report on Sexual Behavior in the Human Male

GOVERNMENT SERVICE
United States Department of Commerce, National Acid Precipitation Assessment Program
1989-pres. Member, Oversight Review Board I and II


GOVERNMENT SERVICE (cont'd)
United States Department of Commerce, Bureau of the Census
1989-91 Member, Special Advisory Panel on 1990 Census

National Advisory Committee for Oceans and Atmosphere
1975-77 Member

President's Commission on Federal Statistics
1970-71 Member of commission, which prepared: Federal Statistics (two volumes)

President's Air Quality Advisory Board
1968-71 Member

National Science Foundation
1962-64 Member, Science Information Council

President's Science Advisory Committee, Office of Science and Technology
1960-63 Member
1971-72 Chairman, PSAC Panel on Chemicals and Health, which prepared: Chemicals & Health
1964-65 Chairman, PSAC Panel on Environmental Pollution, which prepared: Restoring the Quality of our Environment

Advisory Commission on Weather Control
1977-78 Chairman, Statistics Panel

United States State Department, US Delegation
1972 UN Conference on the Human Environment, Stockholm, Sweden
1959 Technical Working Group 2 of the Conference on the Discontinuance of Nuclear Weapon Tests, Geneva, Switzerland

President's Post-Election Task Forces
1968 (Nixon) Air Pollution
1964 (Johnson) Environmental Pollution

HONORS
1990 Educational Testing Service Award for Distinguished Service to Measurement
1989 Monie A. Ferst Award of Sigma Xi
1984 James Madison Medal, Princeton University
1983 Deming Medal, American Society for Quality Control


HONORS (cont'd)
1982 Medal of Honor, Institute of Electrical and Electronics Engineers (IEEE), for the Cooley-Tukey Fast Fourier Transform (FFT) Algorithm
1977 Shewhart Medal, American Society for Quality Control
1975 Hitchcock Professor, University of California, Berkeley
1973 National Medal of Science, "For his studies in mathematical and theoretical statistics . . . and for his outstanding contributions to the applications of statistics to the physical, social, and engineering sciences."
1965 S.S. Wilks Medal, American Statistical Association
Member National Academy of Sciences
Member American Philosophical Society, Vice President 1974-77
Member American Academy of Arts and Sciences
Member International Statistical Institute
Member Sigma Xi
Member New York Academy of Sciences
Foreign Member The Royal Society (London)
Hon. Member Royal Statistical Society (England)

FELLOWSHIPS
1957-58 Fellow, Center for Advanced Study in the Behavioral Sciences
1949-50 Guggenheim Fellow

PROFESSIONAL SOCIETIES
Fellow Institute of Mathematical Statistics; President, 1960
Fellow American Statistical Association; Vice President, 1955-57; Chairman, Biometrics Section and Physical & Engineering Sciences Section
Fellow American Society for Quality Control
Fellow American Association for the Advancement of Science; Chairman, Section A, 1972; Chairman, Section U, 1974
Member Biometric Society; Member of the Council


PROFESSIONAL SOCIETIES (cont'd)
Member Association for Computing Machinery
Member Operations Research Society of America
Member Mathematical Association of America
Member American Mathematical Society
Member International Statistical Institute

OTHER ORGANIZATIONS
Member Cosmos Club, Washington, DC
Member Nassau Club, Princeton, NJ

PUBLICATIONS IN BOOK FORM
Convergence and Uniformity in Topology, Annals of Mathematics Studies, No. 2, Princeton University Press, NJ, 1940
Statistical Problems of the Kinsey Report on Sexual Behavior in the Human Male (W.G. Cochran, F. Mosteller, and J.W. Tukey), American Statistical Association, Washington, 1954
The Measurement of Power Spectra from the Point of View of Communications Engineering (R.B. Blackman and J.W. Tukey), Dover Publications, NY, 1959
Exploratory Data Analysis, Limited Preliminary Edition, Volumes I-III, 1970-71; First Edition, 1977
Robust Estimates of Location: Survey and Advances (D.F. Andrews, P.J. Bickel, F.R. Hampel, P.J. Huber, W.H. Rogers, and J.W. Tukey), Princeton University Press, NJ, 1972
Index to Statistics and Probability, The R&D Press, Los Altos, CA
  Vol. 1, The Statistics CumIndex, J.L. Dolby and J.W. Tukey (1973)
  Vol. 2, Citation Index, J.W. Tukey (1973)
  Vol. 3, Permuted Titles A-Microbiology, I.C. Ross and J.W. Tukey (1975)
  Vol. 4, Permuted Titles Microclimatic-Z, I.C. Ross and J.W. Tukey (1975)
  Vol. 5, Locations and Authors, I.C. Ross and J.W. Tukey (1973)

OPENING MATERIAL


PUBLICATIONS IN BOOK FORM (cont'd)
  Vol. 6, Index to Minimum Abbreviations, L.C. Ross and J.W. Tukey (in preparation)
Data Analysis and Regression, F. Mosteller and J.W. Tukey, Addison-Wesley Publishing Company, Reading, MA, 1977
Understanding Robust and Exploratory Data Analysis (D.C. Hoaglin, F. Mosteller and J.W. Tukey, eds), John Wiley & Sons, Inc., NY, 1983
Exploring Data Tables, Trends and Shapes (D.C. Hoaglin, F. Mosteller and J.W. Tukey, eds), John Wiley & Sons, Inc., NY, 1985
Collected Works of John W. Tukey, Wadsworth Advanced Books & Software, Monterey, CA
  Vol. I, Time Series: 1949-1964 (D.R. Brillinger, ed), 1984
  Vol. II, Time Series: 1965-1984 (D.R. Brillinger, ed), 1985
  Vol. III, Philosophy and Principles of Data Analysis: 1949-1964 (L.V. Jones, ed), 1986
  Vol. IV, Philosophy and Principles of Data Analysis: 1965-1986 (L.V. Jones, ed), 1986
  Vol. V, Graphics (W.S. Cleveland, ed), 1988
  Vol. VI, More Mathematical: 1938-1984 (C.L. Mallows, ed), 1990
  Vol. VII, Factorial and ANOVA, 1949-1962 (D.R. Cox, ed), 1992
Collected Works of John W. Tukey, Chapman & Hall, Inc., NY
  Vol. VIII, Multiple Comparisons, 1948-1983 (H.I. Braun, ed), 1994
  Others in preparation
Configural Polysampling: A Route to Practical Robustness (S. Morgenthaler and J.W. Tukey, eds), John Wiley & Sons, Inc., NY, 1990
Fundamentals of Exploratory Analysis of Variance (D.C. Hoaglin, F. Mosteller and J.W. Tukey, eds), John Wiley & Sons, Inc., NY, 1991

In addition, there are more than 500 technical papers on statistics, other scientific subjects, and mathematics authored by J.W. Tukey. October 1996

— PH.D. THESES DIRECTED BY JOHN W. TUKEY —
PRINCETON UNIVERSITY, 1940-1990

(not officially) Frederick Mosteller (1946): On some useful inefficient statistics.
John E. Walsh (1947): Some significance tests for the median which are valid under very general conditions.
Melvin P. Peisakoff (1950): Transformation parameters.
Bernard Sherman (1950): A random variable related to the spacing of sample values.
Leo A. Goodman (1950): The estimation of population size using sequential sampling tagging methods.
Paul Meier (1951): Weighted means and lattice designs.
Ray Bradford Murphy (1951): On tests for outlying observations.
Alan T. James (1953): Group methods in normal multivariate distribution theory.
David L. Wallace (1953): Confidence regions for the location of the vertex in quadratic regression.
Richard F. Link (1954): Some statistical techniques useful for estimating the mean life of a radio-active source.
(not officially) Marvin L. Minsky (1954): Neural-analog networks and the brain.
Ralph Wormleighton (1955): Some extensions for the sign test.
Arthur P. Dempster (1956): The two sample multivariate problem in the degenerate case.
Thomas E. Kurtz (1956): An extension of a multiple comparisons procedure.
N. Roy Goodman (1957): The joint estimation of the spectra, cospectrum and quadrature spectrum of a two-dimensional stationary Gaussian process.
Bradley D. Bucher (1957): The recovery of intervariate information in incomplete block designs.
James G.C. Templeton (1957): A test for detecting single-cell distribution in contingency tables.
Harvey J. Arnold (1958): Permutation support for multivariate techniques.
David R. Brillinger (1961): Asymptotic means and variances in the k-dimensional case.


Donald S. Burdick (1961): Stage by stage modification of polynomial estimators by the jackknife method.
John A. Hartigan (1962): Invariant Bayesian and non-Bayesian inversions.
Peter B. Nemenyi (1963): Distribution-free multiple comparisons.
Thomas H. Wonnacott (1963): A Monte-Carlo method of obtaining the power of certain tests of location when sampling from non-normal distributions.
Morton B. Brown (1965): A secondarily Bayes approach to the two-means problem.
James R. Thompson (1965): A shrinking technique for modifying minimum variance unbiased linear estimators.
W. Morven Gentleman (1966): Robust estimation of multivariate location by minimizing p-th power deviations.
James J. Filliben (1969): Simple and robust linear estimation of the location parameter of a symmetric distribution.
Charles Lewis (1970): The countback method for analyzing sensitivity data.
Stanislaus Michael D'Souza (1971): A statistical model for interlive birth intervals in non-contraceptive populations.
Jon Helge Knudsen (1971): Dynamical modeling of truncated moment equations in spectral form for nondivergent barotropic flow.
James J. Schlesselman (1971): Data transformation in two-way analysis of variance.
David C. Hoaglin (1971): Optimal invariant estimation of location for three distributions and the invariant efficiencies of some other estimators.
Alan Gross (1973): Robust confidence intervals for the location of long-tailed symmetric distributions.
Anita Nowlin (1973): Statistical analysis of linguistic word frequency distributions and word length sequences.
Edward Binkowski (1974): Optimal estimates and robust estimation.
Steven Finch (1974): Univariate robust test of symmetry.
Lincoln Polissar (1974): Parametrizing age distributions of death by cause.
Paul Velleman (1976): Robust non-linear data smoothers - theory, definitions, and applications.
Tony Kwok Sen Quon (1976): Optimal invariant estimation of location for very small samples.

Susan Peterson Arthur (1979): Skew/stretched distributions and the t-statistic.

Michael Schwarzschild (1979): New observation-outlier-resistant methods of spectrum estimation.

Karen Kafadar (1979): Robust confidence intervals for the one- and two-sample problem.
Roberta Guarino (January, 1981): Robust estimation under a finite number of alternatives; compromise estimates of location.
Katherine Bell Krystinik (October, 1981): Data modifications based on order; Pushback; a configural polysampling approach.
Paul Horn (October, 1981): On simple robust confidence procedures.
Stephan Morgenthaler (July, 1983): Robust confidence intervals for location and scale parameters: the configural approach.
Fanny L. O'Brien (April, 1984): Polyefficient and polyeffective simple linear regression estimators and the absolute polyefficiency of the biweight regression estimator.
Dhammika J. Amaratunga (October, 1984): Pushing back regression coefficients and evaluating performance via orthogonal samples.
Clifford M. Hurvich (July, 1985): A unified approach to spectrum estimation: objective estimate choice and generalized spectral windows.
George S. Easton (August, 1985): Finite-sample and asymptotic approaches to compromise estimation including compromise maximum-likelihood estimators.
Ha Hung Nguyen (October, 1986): Approximation of the optimum Pitman compromise estimate in O'Brien's case, investigated in terms of a single configuration.
David M. Brown (June, 1987): Maximization of an equal-size two-sample Student's t statistic over a simple family of transformations of the samples.
Eugene G. Johnson (June, 1988): Robust analysis of factorial designs via elemental subsets and outlier sterilization.
Katherine Mary Hansen (June, 1989): Some statistical problems in geophysics and structural geology.

PARTIAL LIST OF JOHN W. TUKEY'S GRANDSTUDENTS

David L. Wallace's students:
— Robert Ellison (1960): A multivariate k-population classification problem.
— Ying Yao (1962): On the comparison of the means of two normal populations with unknown variances.
— Hee Bok Park (1964): Improving the robustness of inferences.
— Alastair Scott (1965): Allocation of effort in the design of selection procedures.
— Mitchell Snyder (1966): Winsoring with a co-variate to improve efficiency.
— James Yarnold (1968): The accuracy of seven approximations for the null distribution of the chi-square goodness of fit statistic.
— Lily Sanathanan (1969): Estimating population size in the particle scanning context.
— James Landwehr (1972): Approximate confidence regions from cluster analysis.
— Stella Machado (1976): Transformations of multivariate data and tests for multivariate normality.
— Andries DeWet (1977): Estimation of linear relationships between variables subject to random errors.
— C. Hendricks Brown (1981): Missing values in factor analysis.
— Daniel Schafer (1982): Use of the correction for attenuation estimator with judgmental information.
— Theodore Karrison (1985): Restricted mean life with adjustment for covariates.
— Steven J. Skates (1987): Laplacian and uniform expansions with applications to multidimensional sampling.
— Louis Rizzo (1989): Predictive regression estimators of the finite population mean using functions of the probability of selection.
— Li Ming Dong (1995): Adjustments for covariates in the analysis of clinical trials.


Leo A. Goodman's students:
— Albert Madansky (1958): Identification and estimation in latent class analysis.
— Shelby J. Haberman (1970): The general log-linear model.
— Thomas W. Pullum (1971): Occupational mobility and constrained movement over categories.
— Robert E. Fay (1974): Statistical considerations in estimating the current population of the United States.
— Clifford C. Clogg (1977): Measuring underemployment: demographic indicators for the United States, 1969-1973. Deceased.

David R. Brillinger's students:
— Raju J. Bhansali (1971): Asymptotic properties of the Wiener-Kolmogorov predictor.
— Alan J. Izenman (1972): Reduced-rank regression for the multivariate linear model.
— Andrey Feuerverger (1973): On the cumulant spectral approach to polynomial regression of stationary time series.
— Pedro A. Morettin (1973): Walsh-Fourier analysis of time series.
— Luis Torres-Melo (1974): Stationary point processes.
— John A. Rice (1974): Statistical analysis of self-exciting point processes.
— Vural A. Akisik (1975): On the estimation of parametric transfer functions.
— Sadru Fazal (1975): Nonparametric discriminant analysis.
— Tore Schweder (1975): Transformations of point processes: applications to animal sighting.
— Mamoud Daneshmand (1976): Sampling and time series analysis.
— Jostein Lillestol (1976): Improved estimates of parameters in complex normal models.
— Haiganoush K. Preisler (1977): Statistical models for populations of sickle and normal blood cells.


— Anthony D. Thrall (1978): Spectral estimation for randomly sampled time series.
— J. Stanford Willie (1979): Analyzing relationships between a time series and a point process.
— Tariq Hasan (1979): Complex demodulation: some theory and some applications.
— Knut K. Aase (1979): Recursive estimation in time series models.
— Peter M. Guttorp (1980): Estimation in population processes.
— Stephen P. Ellis (1981): Density estimation for point process data.
— Benjamin Sagalovsky (1982): Maximum likelihood and related estimation methods in point processes and point process systems.
— Manuel Folledo (1983): Robust/resistant methods in the estimation of the evoked response curve.
— Shean-Tsong Chiu (1984): Statistical estimation of the parameters of a moving source from array data.
— G. Ross Ihaka (1985): Ruaumoko.
— S. Aik Quek (1987): The mixed effects model: fitting and validation.
— Etsuo Miyaoka (1987): Estimation in mixed Poisson process models.
— Ettore Marchetti (1987): Statistical inference in doubly stochastic point processes.
— Bruce Smith (1988): The neurophysiological quantal hypothesis.
— Ebby Kimani (1989): Statistical modeling in pest management: formulation of a mosquito control model.
— Xiaobao Wang (1991): On the estimation of trigonometric and related signals.
— Thomas Scheike (1993): Statistical analysis of tessellations and nonparametric kernel regression with biological applications.
— Wajih Alaiyan (1993): Statistical aspects of the evoked response technique.
— Mark Rizzardi (1993): She loves me, she loves me not: pondering over an ordinal-valued time series of tropical flowers.


John A. Hartigan's students:
— Norman Johnson (1974): The effect of population skewness on confidence intervals determined from mean-like statistics.
— Christine Waternaux (1975): Asymptotic distribution of the sample roots for a nonnormal population.
— Gerard E. Dallal (1975): Simple interaction models for two-dimensional contingency tables.
— William F. Eddy (1976): Optimum kernel estimators of the mode.
— Gokul D. Ghia (1976): Truncated generalized Bayes tests.
— Takashi Miyawaki (1976): Mixture of two distributions: a Bayesian rejection of outliers.
— Yohanan Wax (1976): The adjusted covariance regression estimate.
— Lorraine DeRobertis (1978): The use of partial prior knowledge in Bayesian inference.
— Adnan M. Awad (1978): A martingale approach to the asymptotic normality of posterior distributions.
— M. Anthony Wong (1979): Hybrid clustering.
— Jeffrey S. Simonoff (1980): A penalty function approach to smoothing large sparse contingency tables.
— Gary W. Oehlert (1981): Estimating the mean of a positive random variable.
— Daniel B. Ramey (1982): A non-parametric test of bimodality with applications to cluster analysis.
— Daniel G. Barry (1983): Non-parametric Bayesian regression.
— Antonio Possolo (1983): Spatial point processes.
— Siu-tong Au (1984): Estimation of a change-point.
— Edward G. Carlstein (1984): Asymptotic normality and variance estimation for a general statistic from a stationary process.
— John V. Fox (1986): Probability models on the sphere for genetic fate mapping.
— Albyn C. Jones (1986): A stochastic analysis of the propagation of error in floating point calculations.


— Atsuyuki Kogure (1986): Optimal cells for a histogram.
— Robert M. Brunell (1988): Robust estimation using sample spacings.
— Michael D. Escobar (1988): Estimating the means of several normal populations by nonparametric estimation of the distribution of the means.
— Yuichiro Kanazawa (1988): An optimal variable cell histogram based on the sample spacings.
— Ruth Daniel (1989): Probability models for hominoid molecular evolution.
— Anna Nicolaou (1990): Confidence intervals for a scalar parameter of interest in the presence of nuisance parameters.
— Surya Mohanty (1991): Detecting bimodality using the minimal spanning tree.
— Evelyn Crowley (1992): Estimation of clustered parameters.
— David Riceman (1992): An estimator for the linear model.
— Gregory P.M. Rozal (1993): Exploring features of multivariate distributions using constrained spanning trees.
— Zhiwei Ma (1994): Resampling a stationary time series.
— Amy Kiuchi (1994): Predicting progression to AIDS using change points in the series of T4 counts.
— Zhong-xin Zhang (1994): Discrete noninformative priors.
— Tom Kelleher (1995): Admissibility of test based estimators.
— Bruce Rannala (1995): Demography and genetic structure in island populations.

Morton B. Brown's students:
— Camil Fuchs (1976): On test sizes in linear models for transformed variables.
— Emmanuel T. Jolayemi (1982): A Cp statistic to select a log-linear model.
— Judith Bromberg (1984): Modified estimators in log-linear models.
— Barry P. Katz (1984): Detection of a random alteration in a multivariate observation based on knowledge of probable direction.


— Phyllis Gimotty (1984): Goodness-of-fit chi-square tests using imputed data.
— Robert H. Kushler (1987): Models and methods for hormone pulse analysis.
— Bryce Landenberger (1989): Methods of incorporating litter size in the analysis of teratology data.
— Tae-Sung Park (1990): Estimation of nonignorable nonresponse models for categorical data.
— Stephen P. Schmaltz (1991): Inverse nonlinear regression in the presence of measurement error.
— Robert G. Bagin (1992): The estimation of marginal dose response from joint toxicity information.
— Wen-Miin Liang (1995): A median-based test to compare treatments when there is informative dropout.
— David T. Mauger (1995): A new method for characterizing pulses in a hormone time series.
— Mei-Feng Huang (1995): Nonignorable nonresponse model for longitudinal categorical data.

James R. Thompson's students:
— Gilbert de Montricher (codirected with R.A. Tapia) (1972): Nonparametric Bayesian estimation of probability densities by function space techniques.
— John Bennett (codirected with R.J.P. de Figueiredo) (1973): A pattern recognition technique for remote sensing.
— James A. Hokanson (codirected with Barry W. Brown) (1975): (UTGSBS) A simulation study of myeloma growth and treatment.
— David Scott (codirected with R.A. Tapia) (1975): Discrete maximum penalized likelihood density estimation.
— Melvyn Smith (1980): An epidemiological model for the control of gonorrhea.
— E. Neely Atkinson (1981): An epidemiological examination of the effects of asbestos on lung function.
— Richard Hathaway (codirected with J.E. Dennis) (1982): The EM algorithm for the maximum likelihood estimation of Gaussian mixtures.


— Steven B. Boswell (1983): Nonparametric mode estimation for higher dimensional densities.
— Roland P. Sanchez (1990): A nonparametric regression algorithm for time series forecasting applied to daily maximum urban ozone concentrations.
— R. Webster West (1995): Modeling the potential impact of HIV on the spread of tuberculosis in the United States.
— Marc N. Elliott (1995): An automatic algorithm for the estimation of mode location and numerosity in general multidimensional data.
— Mark Overley (1995): A stochastic approach to prepayment modeling.

Karen Kafadar's students:
— Deborah Leigh Hall (1996): Methodology to determine homology and clustering as applied to intronic regions of regulatory cancer genes and non-regulatory genes.

Paul Horn's students:
— Amy Fisher (1992): Robust prediction in a regression setting.

Stephan Morgenthaler's students:
— Michel Donegani (1991): Construction de tests de rééchantillonnage adaptatifs.
— Annick Clerc Berod (1993): Inférence conditionnelle et robuste pour un paramètre de régression.
— Christian Posse (1993): Analyse exploratoire de données et discrimination à l'aide de projection pursuit.
— Xavier de Luna (1996): Une approche non-paramétrique et parcimonieuse de la prévision de séries temporelles.
— Marc Genton (1996): Robustesse dans l'estimation et l'ajustement du variogramme en géostatistique.
— Enrico Chavez (1997): Test de l'homogénéité des dispersions de k échantillons: approche conditionnelle et bioptimale.

Clifford M. Hurvich's students:
— Lawrence G. Tatum (1991): High breakdown methods of time series analysis.

A CONVERSATION WITH JOHN W. TUKEY

This conversation with John W. Tukey took place on June 20, 1995 at Princeton University's Jadwin Hall. The questions were asked by Luisa T. Fernholz, Stephan Morgenthaler and others among the public present. The conversation was taped, and what follows is a transcribed and slightly edited version of these tapes.

John W. Tukey's High School and College Days

Q: I am going to start with a somewhat personal question. We heard yesterday that you did not have a formal education, but were educated at home. Could you tell us a little bit about that?

A: Okay, well, by the time I was five, my parents had settled in New Bedford. My father was head of the Latin Department in the High School. In those unregenerate days a married woman couldn't be a teacher in Massachusetts. So, my mother wasn't a teacher, but she was a substitute. And I have heard it claimed, that between the two of them, they ended up teaching everything in this high school, except bookkeeping and physical education. I think you have to add chemistry to that. And rumour says that my mother decided that it would be bad for me to go to school because, either I would get very lazy, or I'd be a problem, or something. And so, there wasn't too much formal education. But I spent a lot of time in the public library. New Bedford had a wonderful public library in those days. Not only did it have the Journal of the American Chemical Society, but it had the Transactions of the American Mathematical Society. And I think the reason that I went to Brown as a chemist was because I could read the JACS, but I couldn't read the Transactions.

Q: When you went to college did you regret being brought up in this isolated environment?

A: It wasn't that isolated; in the sense that I am not sure that the environment was more isolated than if I had gone to the high school. I actually went to the high school for one term in French and some mechanical drawing. I am not sure if I am the last person to enter Brown with credit for mechanical drawing or not.

Q: How did you enter Brown?

A: College board exams. And so I went in and ended up with advanced credits in Mathematics and, I guess, German. So, I went to junior differential equations as my freshman math course. We had a cousin who was the head of the mathematics department in the high school. But, again there wasn't too much formality. But I worked lots of problems in a calculus book, and that seemed to produce the necessary effects.

Q: You did your Ph.D. at Princeton in mathematics. Tell us about that.

A: I came to Princeton in '37 as a graduate student in chemistry, and ended up being a lab assistant in one of the freshman inorganic courses. In Princeton you had to be a Ph.D. to be a lab assistant in physical chemistry, which worried me a little because I had been a lab assistant in physical chemistry for a year and a half at Brown. But anyway, I fell over the fence the summer before I came to Princeton. I came in chemistry, but I spent a lot more time in old Fine Hall than in Frick; and I took prelims at the end of that year.

Q: Harking back to Brown once more. Do you remember a particular Professor or course you liked?

A: I don't think there was one that was an obvious dominant influence or anything of that sort. I knew most of the professors in the mathematics department and most of the professors in the chemistry department. I was there four years and, at one point, I was going to take a master's degree and an ScB. at the end of the four years. The department didn't like giving two degrees at the same time and so they decided to give me an ScB. after three years. But, W.A. Noyes, Jr., who was later the editor at JACS, used to claim that he did the glass blowing for my thesis experiments in my fourth year. I was well enough tuned in on the scuttlebutt in the math department. There was a lady graduate student who went to Illinois and the word came back that she was going to marry Pierce Ketchum who was a professor there. And so, I know who it was who proposed (in the math department) to send a telegram reading 'Congratulations, you Ketchum'. So, I didn't feel isolated!
But, probably among the mathematicians, it was Tamarkin from whom I took a graduate course in the second, or third year; he was sort of the senior research man in Math at that point. Among the chemists — the course work — I don't think I have anything to say. I spent time with the physicists (and geologists) too. Bruce Lindsay, who was then chairman of the physics department, had come from New Bedford originally. I don't think I have a better answer for you.

Q: When you went to Princeton as a graduate student, did you take courses, or did you immediately start on your research?

A: No, no! I went to lots of courses and seminars in Fine and to the chemistry courses that a graduate student (first year) might reasonably be supposed to go to. Now, if I hadn't already fallen over the fence, I don't know. Henry Eyring was here in those days. Henry was a theoretical chemist who was the salt of the salt of the earth. And if I hadn't been so far over the fence and if Henry hadn't been away for a semester I might have stayed in chemistry a little longer. But in those days the Institute for Advanced Study and the math department were all mixed together in old Fine and people sort of didn't segregate, either in lectures, or in turning up for tea. Along in the spring of that first year when I was still a chemist, Marston Morse asked me whether I was at the University or at the Institute. Since I was still a chemist, I thought that was interesting. But what had been going on is that Norman Steenrod and I had been sitting in the two back corner seats in old Fine 113 and doing our best to keep Marston honest in the seminar he was giving.

Q: In general topology I learned about Tukey's lemma and I know that you have made other important contributions to mathematics. I was wondering, how your transition from Mathematics into Statistics happened?

A: Practice! May '41 I went to work for Merrill Flood in Fire Control Research — boom-boom fire control, not hose fire control. Charlie Winsor was there, and except for the year and a half that Charlie wasted in Washington in the Mine Warfare Research group, Charlie and I spent the war pretty much either in the same office or across the hall. Charlie knew an awful lot of statistics that wasn't in the books then, and I am sure a certain amount that isn't in the books yet. So I learned by talking to Charlie and by doing things and by reading.

Q: Going back to your days as a graduate student in mathematics. Who were the professors among the mathematicians at Princeton whom you remember most?

A: Again I am not sure that one can pick individuals. I quote you a verse from the faculty song:

Here's to Lefschetz, Solomon, L.,
unpredictable as hell,
when, laid at last beneath the sod,
then he'll begin to heckle God.
That was the Fine Hall verse that senior graduate students, probably years before, produced. That was a private verse for the faculty song. So, Lefschetz is one, Bohnenblust is another because he was a clear lecturer, Steenrod was on the faculty at that point, but we interacted lively, Tucker. Although there were a lot more contacts in Tucker's case during the war than before. I don't know just what to say.

Q: What about life in the graduate college? Do you remember any particular incidents?

A: The situation in the graduate college was that fairly soon I joined a group of people who ate together. It was a continuation of a group that had eaten together the year before. This was before Hitler's invasions, so it is perhaps not surprising that it was known as the Fuhrocracy and Lyman Spitzer, who just recently retired from across the street here, in Astrophysics, was officially the Fuehrer. He sat at the end of the table and if spare ice-creams needed to be divided it was his responsibility to divide them fairly. But there was a physicist or two, a couple of astrophysicists, one theoretical chemist, several mathematicians, and one romance-linguist, who was a courtesy member of the group, but who had been authorized to put anybody into a Klein bottle that he wanted to. But, Ralph Boas, who was a national research postdoc, and Frank Smithies, who was over for a year from Cambridge, and I ended up hanging out together a fair amount. So, we were just three people who sat through Aurel Wintner's lectures on convolutions, and so when the Princeton notes came out we were the note writers; the only other person was Cyrus McDuffie. And so, the Library of Congress card to this reads: notes by Ralph Boas, Frank Smithies, John Tukey with sympathetic encouragement from Cyrus C. McDuffie. And when the seminar came to the end at the end of spring, McDuffie packed us all into his car and the whole group went up to North Jersey for the day. Again, there was no shortage of interaction.

I am not sure which year it was, it must have been the next year. Arthur Stone bought some paper for his ring notebook at Woolworth's and since he had a British notebook, he had to cut an inch off it. Since he had all these strips of paper, he had to do something interesting with them. So, he was folding the regular polygons and he was smart enough to recognize the first known hexaflexagon when it was made. So that Arthur and Brian Tuckerman and Dick Feynman and I spent a fair amount of time on flexagon theory. The asymptotic formula is known for the number of different hexaflexagons with n sides, for example. It turns out to be equivalent to the problem of triangulating a regular polygon, etc.

Q: Who was your advisor?

A: I think Lefschetz. You know, Peisakoff played the version in the opposite direction. I think I was supposed to be his advisor, but I was not at all convinced.

Q: I was going to ask you whether you advised any math Ph.D. students after starting to work as an instructor.

A: Not that I can specifically think of. But I was really only two years there. In May 1941 I pulled out across Nassau street.


The War Years

Q: Can you tell us a little bit about the work in the fire control group during the war.

A: It started out as a project to study the training of height and range finder operators. Do you know what a stereoscopic range finder is? It is a situation where you see the field differently in the two eyes and there's a reference mark and you can try to make the reference mark appear at the same distance as the target. Details are not that important for the present purposes. And it was mainly stereoscopic height finders, i.e. they had automatic conversion of range to height for antiaircraft fire, and then all the big naval guns used stereoscopic range finders, because most of the fire against naval targets was against targets that you could see at least the mast of. Particularly if you put the range finder high in the other ship's mast. And Brock McMillan, who was I guess a year or so ahead of me as a postdoc, ended up with other people at Fortress Monroe running a field laboratory, and the group in Princeton was a combination of true target position and analysis. The point is, if you're going to test height finders and height finder observers, you have to have some way to know how high the target really is. And so there were recording phototheodolites which somebody tried to keep pointing at the target. Where the image showed in the frame you could correct and get a good idea of what the angle of the target was. Then we had one of the first IBM multiplying punches and so we actually got IBM calculations of what the true heights actually were. But, there was a lot of physiology and fairly soon there was more to do with the hardware. Why were there temperature errors? We pioneered filling the height finder with helium instead of air, the point being the thermal conductivity of helium is about seven times that of air. So the temperature gradients inside the instrument were much smaller if you filled it with helium.
And this broadened to get into armoured vehicle fire control. We had various interactions with Frankford Arsenal and eventually we were, for a while, funded out of there. We had some civil service personnel here at 20 Nassau and they couldn't be told what to do by anybody but a government employee. So, a couple of us became part-time technical experts so we could supervise the civil service people. Then Colonel Trichel moved to Washington and we ended up getting involved in testing rocket powder (because he took over the corresponding section of the Office, Chief of Ordnance), and then later on we came back to NDRC because there was this project (AC-92) which was trying to do all the fixes on the B29 as an operational device that they could. And, we ended up being the coordinating group and, as I say, that's when I learned to ride airplanes. The Mt. Wilson observatory people were hanging up little models of aircraft with light shining out of them to see what the defensive fire coverage really was for different formations, and there were people in two or three places in Texas, and also something was going on at Smoky Hill Army Air Field in Kansas. Those who've never seen a loaded B29 take off in Kansas probably missed a sight that will never recur again. This was the really flat Kansas and loaded B29s would go down the runway — I think they had somewhere between a 10000 foot and 12000 foot runway — and then they disappeared under the curve of the earth and you wouldn't see them till they were maybe 10 or 20 miles out. With that engine, by the time you got up to somewhere between 100 feet and 200 feet, the temperature on the engine was over red line and you had to flatten out (to ease the load on the engines and let them cool a little) before you could fly up the rest of the way. This is why the airstrips on Tinian and Saipan, and so on, always went to the water's edge to get maximum clearance, except for wave heights.

After that job wound up I went to work for Murray Hill (Bell Labs) and was involved in the first paper-and-pencil study for what was called AAGM1 and later called NIKE and later called NIKE-AJAX after it started having big brothers. This was an anti-aircraft guided missile. Bernie Holbrook (who was, we finally decided later, something like a ninth-and-a-half cousin of mine), a switching engineer, and I, we did trajectory, aerodynamics and warhead for the paper-and-pencil study, both of us being "experts" in all these fields. But on the other hand the state of supersonic aerodynamics was poor even if we got all of the best information. We went to Langley and talked to the people around the wind tunnels and so on.
Then, when Don Ling came to work for the Labs somewhat later, he produced a little pink paper — meaning unofficial draft memo — ascribing to me the theorem that if a semi-continuous function had its values known at three points it was well-determined, but if it was a continuous function you only had to know its value at one point. That was about the state of knowledge of supersonic aerodynamics. People were still taking seriously the linearized theory that in particular said that the control surfaces would start to work oppositely when you went through Mach equals root three. Of course, nothing of that sort ever happened. So, I stayed with this. Afterwards I spent a fair amount of my Murray Hill time in connection with NIKE for quite a long time. I got to go on impact parties out at the White Sands Proving Grounds, which meant going around through the boondocks and seeing if you can find the pieces that came down.


Statistics in the Forties and Fifties Q: Coming back to your statistics education. You were turned into a statistician through practice. A: And eating an average of 1.9 meals a day with Charlie Winsor over probably the equivalent of three years. Q: So, he was a major influence on you? A: Yes. Q: Did you ever read a statistics book? A: Oh, yes. When I was at Brown I read a lot of miscellaneous books in the math library, including some statistics books. Even back at that stage. I used to have a little tin container with 3 by 5 cards in it that had interesting-looking tables of critical values for statistics. That didn't mean I had any feel for them. But on the other hand I am one of the — I suspect not inconsiderable — number who taught a graduate course in statistics before he ever sat in a course of statistics. Q: You took no course from Sam Wilks? A: No course of this sort, no. But I saw the books and looked through them. Sam and I did a little joint work. Q: Around 1945 through 1950, could you describe the statistical community in the US? A: Well, I'll just have to try to isolate that period by guess. ASA had been meeting yearly since Lord only knows when, maybe since the foundation. The Institute I think came into existence in 1938. In those days to join the Institute you had to be proposed by two members. They were worried about keeping the nonmathematical statisticians out for a while. That they gradually recovered from. The Biometrics section of ASA had been in existence, and ENAR came into existence probably in late '45. This is the Eastern North American Region of the Biometric Society. I think it would have been late '45 and not '46; one day there was a meeting in Woods Hole to set up the Biometric Society, followed by the first meeting of ISI after the war in Washington. And Linder and Fisher and I and someone else shared a sort of a four-room suite at the hotel at this (Washington) meeting.
But, it was somewhere about this time there was some discussion about the vigor of comments in biometrics meetings. We had one of the very good biologists give a talk. And people had asked questions as vigorously as usual. So, there was some question at lunch whether the speaker had been unfairly treated. The outcome of this was that it was agreed that at a biometric meeting one was entitled to ask any question that one felt like. So, the biometric thing was an area of activity. I think more so than mathematical statistics. But mathematical statistics more so than ASA in general. Who were the key figures? After Sam Wilks, Hotelling and Wald, really I think


you have to count Gertrude Cox and George Snedecor. Although Cox and Snedecor were not research-contributing types, they still played a large role. From an older generation, A.T. Craig in Iowa City, and on the coast of course Neyman and the people that he'd drawn together. And fairly soon Bowker at Stanford, because of his building-up powers. This isn't an exclusive list. Now, if you took a biometric flavour, you would get a different set of people. You probably have to get Cochran in both sets. People who'd be respected on the biometric side certainly would include our friend at the Mayo Clinic, Joe Berkson, who was a red-headed Irishman. The red was pretty much faded in the hair but not in the spirit. Since he had both an MD and a Ph.D. you couldn't put him down in any obvious manner if you got into an argument. Now, maybe it's relevant that I think I made some comments at about that time that the person that I would be most careful with, if they were in the audience and I was giving a paper, was Milton Friedman. Because Milton had worked with Allen Wallis at a different Columbia research group during the war. This is the one that produced the sequential analysis things. One day they took Jack Wolfowitz out to lunch and worked him over thoroughly about the importance of doing something with this. A month later, when nothing came of that, they took Abraham Wald out to lunch and he came in the next morning with the fundamental identity of sequential analysis. Milton was well acquainted with the statistical side and very sharp. Probably easier to cut yourself on him than anybody else in the Biometric Society or the Institute. Of course Jimmy Savage was well started on his way up by that time. As of '46, if either Jimmy or Fred Mosteller was in a room out of sight talking, no one could tell which one it was. They'd spent so much time working together during the war that they ended up equivalent in accent.

John Tukey's Work for the Federal Government Q: I would like to go on to your work with the government. You were a member of the President's Science Advisory Committee. Do you recall incidents from those meetings? A: Jaa.. I'll give you an anecdote. But let me remark, I don't think that over those years I did any statistics (for PSAC). But one of the earlier environmental reports was being discussed and there were people in from some of the government agencies. And it became clear to them that PSAC was going to have, if anything, kind words to say for Rachel Carson in Silent Spring. And the people from agriculture practically wept in their beer. They didn't think that she should receive any mention or notice whatsoever.


Q: The FFT got started at a meeting of PSAC? A: That's not quite the story. The FFT's realization was partly influenced by PSAC meetings. I used to sit next to Dick Garwin down at the far end of the table. This was in the room in old State which had once been the Secretary's office. This was the room in which the Secretary of State saw the Japanese envoys just before Pearl Harbor. I was sitting there scratching and Dick wanted to know what, and I told him what it was about generally. He went back to Watson Lab and he had some things going on that required some large Fourier transforms, and so he tried to get Jim Cooley to program this. And after a while Jim did, regarding it solely as a programming exercise — all the theory was done. I didn't think all the theory was done, and this is why there is the Cooley-Tukey paper. It really didn't start there, and the initial reference is probably the Princeton notes on a graduate course in time series, which I think either matched or pre-dated this. And what really happened in due course is that Jim Cooley produced one algorithm and Gordon Sande produced the transposed algorithm. And eventually IBM kicked Jim into publication because they decided they didn't want to try to patent it and they didn't want anybody else to. And somebody fell out to be a co-author, and I sort of floated along and didn't take adequate action to see that Gordon got his stuff out, which I always felt bad about. Q: Your work on the PAQAB seems at least in part to have led to the award of the Presidential Medal of Science. Could you elaborate on the work you did there? A: I doubt if there was very much connection. That sounds to me like something that got mentioned in a list because it was handy. PAQAB (President's Air Quality Advisory Board) was moderately effective.
This was in the first Ruckelshaus era as EPA administrator, and Ruckelshaus was still optimistic, feeling that if you talked nicely and informatively to the polluters you could get them to stop polluting. He learned quite rapidly. It was a respectable advisory group but nothing in particular. Presumably, that got cited along with Restoring the Quality of our Environment, a report that was originally a Great Society task force for (President) Johnson. There were ten such; nine of them reported through one channel in the White House, we reported through another one. Nine of them leaked, one of them didn't. And after things were over, it was decided that this ought to be converted into a report for the general public, and this was done under PSAC auspices. So that is where Restoring the Quality of our Environment came from. That was the first all-types-of-pollution, moderately-comprehensive report. We beat the Academy group under Athelstan Spilhaus by about six months. So, PAQAB sounds interesting but is not worth particular attention, by comparison.


Statistical Education: Theory, Consulting, EDA, etc. Q: What are your views on education? How should we train the future generations of statisticians? A: Now, let's just get the ground rules clear! Are we talking about education of statisticians, education in statistics, or education in general? Q: Let's start with education of graduate students in statistics. How much math, how little math? Does it help, does it hurt? A: I think the answer to that one is: First, there shouldn't be a single answer. Secondly, it's not the whole story. Mathematics didn't hurt me. As I understand it, when one graduate student came to Princeton, the word among the grad students in the graduate college was: here is someone who is never going to get a Ph.D. from Princeton in math. He met the formal requirements all right. But if he had been pushed too hard in math, something would have had to break. And statistics couldn't have done without having him as a statistician. So the answer is: there are places where it can hurt badly. You can't live these days with no math. But neither, I suspect, can the average graduate student live with all the math required to adequately cope with what's in the journals. They talk about practicing defensive medicine. Statisticians may have to practice defensive mathematics. I well remember a remark Charlie Winsor made walking down just in front of old Fine. Charlie said: "Sam Wilks trains good mathematical statisticians, and it's surprising how soon they become good statisticians." Now, what I worry about most about the math is, well, firstly, the loss of some people who can think well and who can do good things and, secondly, the deflection of people away from thinking about what they can do much more effectively. I think everybody appreciated Paul Velleman's speech yesterday. Paul came to Princeton from essentially a mathematical sociology program and he managed to survive the necessary mathematics.
But I think it was survival rather than anything else. So, given a choice between turning out proto-statistician B with no mathematics and proto-statistician F with no feeling for analysing data, I think I'd almost rather turn out proto-statistician B, although I wouldn't feel it would be fair to him or her, because it would not leave them in the position of practising defensive mathematics after they came out. That would be protecting them and protecting their position among the purists in the field rather than what was needed to do the job. They would feel maybe they need some more mathematics. Now, this is an uncomfortable side of things — it is much easier to teach mathematics, that is, to those who will be taught, than it is to teach some of the other things. There are places where it can hurt badly. I may have caricatured my position a little, but not very much.


Q: How about consulting? Should that be an important part in the training of Ph.D. statisticians? A: If you can do it right. If you have a department where none of the professors have ever done any consulting, then it's — to say the least — dangerous. You can do some of that by interaction. The old Applied Statistics seminar here, which I ran for many years, pulled in people from outside to talk about their problems; they were told there were only two axioms for the seminar: (1) they need not know the answers to the problems they talked about, and (2) the audience could ask almost any question any time. I think that had many of the virtues of a formal consulting situation as far as the graduate students were concerned. You heard Karen yesterday say that what she was talking about, and what she's worked on mainly since she got her degree, came out of a graduate student project. There were a number of graduate students who had a feeling they just didn't have any feel for data, and so we got together and we decided to try to do something with some of the cancer atlas data. The better the consultants you have on the faculty, the more importance you can afford to give to various sorts of consulting arrangements. Because you'll be able to teach the students not only the feel for data and thinking about data, but also about interacting with people in the consultant's role. And that's almost equally important. Q: What is your present view on EDA? How should it be taught, to whom should it be taught? A: Well, as some of you know — most of you don't — in principle, work is going forward for a second edition. And the main thing that will happen is that some of the more unnecessarily complicated things in EDA will go out and be replaced by simpler ones. There'll also be a few things that have come along since then and need to be added. How should it be taught? I guess my only answer to that is "Whatever way a really interested teacher wants to teach it!" Who should it be taught to?
In one extreme, David Hoaglin taught it in a graduate course at Harvard for a while. He was thinking of teaching it for a quarter and the students wanted more. Charlie Smith's mother tried some teaching in high school with it. To whom should it be taught? I think, anybody who is willing to stand for it. There is a famous example, which I can't report in complete detail because I've forgotten some of the details, of somebody at Chicago who really couldn't stand it and ended up by starting statistics courses in three different divisions of the university at three different levels, from undergraduate to graduate, and kept dropping them because they started


to teach EDA. There are people like that — not very many, we hope — and it doesn't pay to teach it to them. But if you're going to teach it to people who have a statistical background, that's more difficult than teaching it to people without. But they are entitled to get more supplementary material and some indication of how things lock together or do not lock together. Q: In your case, a second important part of your education was chemistry. A: First! I have as many chemistry degrees as pure mathematics degrees, and none in statistics. Q: Do you think it is a good idea to have an undergraduate degree in statistics? A: It seems to me the places that do this and take it seriously do fairly well. Have you ever seen the paper "The Education of a Scientific Generalist"? This represents an optimistic view of the kind of diverse education that it might pay to give some people who wanted to head for scientific generalist. Those who haven't heard of the paper should know that the four authors are Hendrik Bode, who was at Bell Labs and was the man who had a lot to do with feedback technology, and Mosteller and Charlie Winsor. We didn't have any great difficulty agreeing on something to put down as a proposal. But I think the answer to this one depends on what you've got in the way of an academic organization to fit into. Doing this in a mathematics department, at any place (Yale, Princeton, Harvard, Brown), can only, I think, be described as totally unfeasible. If you have an interested statistics department and people in other departments who are willing to at least be useful contact points and so on, the situation is very different. If there was no reason for diversity, the University of North Carolina would not require three statistics departments, which is sort of historically what they had.
I don't know the institution as an establishment at all, but I guess with much less than three you couldn't have covered the waterfront as well as you did — you probably didn't cover it far enough anyway. It's the practicality of fitting into the establishment that controls that one, not otherwise. Q: How would you organise a statistics department? A: Do the best you can fitting into the establishment. Probably straining the establishment a little, but not too much. A very delicate operation. As I understand things, Don Rubin at Harvard has actually made enough contact over enough time with the economists that really, at that particular institution, the economics-statistics gap is maybe almost gone. That's a thing that many other institutions, I think, would like to copy, if they knew how.


Qualifying Exam Questions Q: If X is a Poisson random variable with expectation 2, what is its median? A: I have difficulty inventing a third answer. There are at least two respectable answers. The strictly formal answer that says which jump in the cumulative includes p = 1/2. That's formally correct, but not very helpful. Now, let's see. I have to do a small amount of mental calculation. If you like to use halves, the cumulative at 1 1/2 is clearly less than 1/2 and the cumulative at 2 1/2 is bigger. And so, I would plot the cumulative at 1 1/2 and the cumulative at 2 1/2 and draw a straight line connecting those, and say where that line cuts cumulative equal 1/2 is a respectable definition of a median for this situation. Q: The answer is two. What's the answer when the expectation is ten? A: Well, one would have to compute, wouldn't one. Whichever definition we use, the question is: do I have to compute one value of the cumulative of the Poisson or two values of the cumulative of the Poisson? (You would have to compute two, yes.) To be sure, probably you compute two, and if you compute two, then you can do the interpolation. So, the computational load for the two definitions is approximately the same. I am not that well acquainted with what it is you really want to look at here — the percentage points of a Chi-squared and use Wilson-Hilferty? Can I have a show of hands of how many people know Wilson-Hilferty as such? Thirty percent. Q: You don't need to use Wilson-Hilferty. A: You mean μ = 0.667 is close enough? Q: No, the answer is ten. A: You mean the first answer is ten. The second answer is probably 10 and a bit, but I don't know. Let me indoctrinate the audience on Wilson-Hilferty. Wilson-Hilferty says — this is the other E.B.
Wilson, Edwin Bidwell Wilson, and a lady named Hilferty, about 1920 — that to get a respectable percentage point of Chi-squared you take ν times the cube of the expression "1 − 2/(9ν) plus the corresponding normal deviate times the square root of 2/(9ν)". This is remarkably good for ν from, roughly say, 2 up. You don't need it for one, because you can get the percentage points for chi-squared on one out of the Gaussian table. But this is one that every statistician ought to have in their back pocket. Q: When testing for significance for contrasts under conditions of multiplicity, we now can control the "false discovery proportion" instead of the family-wise error rate, thereby increasing power and rendering findings relatively invariant over changes in family size. With large families,
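The two computations discussed in this exchange can be sketched in a few lines. The sketch below, with hypothetical function names, implements Tukey's interpolated median of a Poisson distribution (connect the cumulative at the bracketing half-integers and cut at 1/2) alongside the formal "which jump covers p = 1/2" answer, and the Wilson-Hilferty approximation to a chi-squared percentage point as he states it.

```python
import math
from statistics import NormalDist

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), summed directly."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

def formal_median(lam):
    """Strictly formal answer: the integer whose jump in the cumulative
    includes p = 1/2, i.e. the smallest k with P(X <= k) >= 1/2."""
    k = 0
    while poisson_cdf(k, lam) < 0.5:
        k += 1
    return k

def interpolated_median(lam):
    """Tukey's interpolated median: plot the cumulative at the half-integers
    k - 1/2 and k + 1/2 bracketing the level 1/2, draw the connecting
    straight line, and report where it cuts 1/2."""
    k = formal_median(lam)                 # cumulative at k + 1/2 is >= 1/2
    lo, hi = poisson_cdf(k - 1, lam), poisson_cdf(k, lam)
    return (k - 0.5) + (0.5 - lo) / (hi - lo)

def wilson_hilferty(nu, p):
    """Wilson-Hilferty: the p-th percentage point of chi-squared on nu
    degrees of freedom is about nu * (1 - 2/(9 nu) + z * sqrt(2/(9 nu)))**3,
    where z is the corresponding Gaussian deviate."""
    z = NormalDist().inv_cdf(p)
    c = 2.0 / (9.0 * nu)
    return nu * (1.0 - c + z * math.sqrt(c)) ** 3
```

For expectation 2 the formal median is 2 while the interpolated version gives about 1.85, and for expectation 10 the formal answer is 10, matching the exchange. As a check on the approximation, `wilson_hilferty(10, 0.95)` comes out near the tabled upper 5% point of chi-squared on 10 degrees of freedom, about 18.3.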


the advantages are considerable. Two questions: How should we most constructively think about how to extend such advantages to the establishment of confidence intervals, or what impediments stand in the way of such extensions? A: How many people — I am going to continue this polling process — how many people know about the Benjamini and Hochberg stuff? Running twenty percent, maybe, o.k.! Those who look at big multiple-comparison situations, I think, predominantly feel that 5% simultaneous is being too stiff and that 5% individual is being too darn loose. Now, long ago I suggested using the average of the two guides as an intermediate significant difference; I never pushed it very much and I doubt if I could have sold it real well. In the last year or two, Yoav Benjamini and Yosef Hochberg, both in Israel, have been working on the idea of controlling the fraction of the positive statements that you make that are wrong. Now, what this means, you see, is if you have things being compared and there are going to be very few significances, no matter how assessed, the FDR is going to be very much like the individual rate, because you are going to make maybe one positive statement on average. You are entitled to make, by this formal thing we have to correct in a moment, a twentieth of one. On the other hand, if you have a point, a point, a point and the standard error is about this much (using hand), there is only one difference left that you haven't cornered. Then this thing (the FDR) is going to tell you, you are going to behave very much like the individual thing, and that's fair enough; it's really simultaneous on one. It does seem to have good properties. Some of us, Lyle (Jones) included, believe that the first place you come to on this is a matter of direction only; that is, the positive statements that you might make are that they differ in this direction or they differ in that direction.
But the statement that they nearly differ is silly, because they all differ in some decimal place anyway. And if you talk about directions, then when you are getting very few definite ones, half of them have to be wrong because clearly these are things that have come about from small differences. That's oversimplified heuristics, but roughly right. So, you have to fix up the game somehow. The two Y's like to fix it up by saying that if you get zero over zero you count that as one full case of zero. I like to fix it up by saying if I'm doing things at 5 percent, I'm entitled to make two and a half percent as many false positives as one plus the number of positives. You have to seed things somehow at the beginning and there's a choice on how you do this. If Stephan will read the first part of this question again, I will tell you why I think it is not adequately formulated. Q: How should we most constructively think about how to extend such


advantages to the establishment of confidence intervals? A: The point is that confidence intervals are not confined — the positive confidence intervals are not that different from negative confidence intervals. And my guess is — and I haven't thought this through long enough — but my guess is you end up using simultaneous things for the confidence intervals anyway. I doubt that a very hybridaceous thing would fly that says: well, we aren't going to make a positive confidence interval statement erroneously more than two and a half percent of the time we make positive confidence interval statements, and negative confidence interval statements we don't want to be wrong more than two and a half experiments out of a hundred. That sort of hybrid thing that says "once the statement ceases to be positive, you change the bases on which you evaluate suddenly" — I don't see how to make that one fly any more. I have a suspicion it's never going to fly. So, I don't see anything wrong in principle with using the false discovery proportion for directionality and the simultaneous calculation for the confidence intervals. The fact that I will get some upward directional statements where the confidence interval includes values less than zero doesn't bother me very much. I don't have to have a seamless connection between the two. I offered an extension that is viable, and the impediment that stands in its way is that it's not as logically seamless as you might like. But I think it's better than any seamless one I see. Directional statements versus "I'm uncertain about direction" is a very great difference. A confidence interval that just doesn't quite cover zero and one that just does cover zero are very nearly the same thing. They ought not to be connected in a seamless way.
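The "Benjamini and Hochberg stuff" under discussion is a step-up procedure on the sorted p-values; a minimal sketch, with illustrative p-values of my own invention rather than any data from the conference, might look like this:

```python
def benjamini_hochberg(pvalues, q=0.05):
    """Benjamini-Hochberg step-up procedure: sort the m p-values, find the
    largest rank k with p_(k) <= k * q / m, and reject the hypotheses with
    the k smallest p-values. This controls the expected fraction of the
    positive statements made that are wrong (the FDR) at level q."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank * q / m:
            k = rank
    return sorted(order[:k])   # indices of rejected hypotheses

# Illustrative p-values: two clear effects, the rest noise.
pvals = [0.001, 0.008, 0.039, 0.041, 0.27, 0.34, 0.52, 0.61, 0.74, 0.98]
print(benjamini_hochberg(pvals, q=0.05))   # -> [0, 1]
```

Note that 0.039 and 0.041 would pass an individual 5% test but are not rejected here, because their BH thresholds (3 × 0.05/10 and 4 × 0.05/10) are smaller: the procedure sits between the too-loose individual rate and the too-stiff simultaneous one, which is exactly the intermediate behaviour Tukey describes.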

Arguing the Fiducial Argument with R. A. Fisher Q: In the correspondence that you had with R.A. Fisher about the fiducial argument, it ends very suddenly. You offered some counter-examples and he said you were foolish. Then you offered a different counter-example and he said something else that was sort of rude, and then you said you were going to England to visit him, and that was the last in that correspondence. I was wondering if you could finish the story. A: Well, I was talking with Sir Ronald in his office. And I think roughly what I said was that I didn't see the logical strength of the fiducial argument, but I gave a lot of weight to the fact that he thought it was a good idea. At which point he did his best to show me the door. Since Elizabeth was out in the garden in the other direction, talking to one of the daughters, I didn't get shown the door. So he grabbed his hat and his cane and went toodling out the door himself. I would say that was the end of the correspondence.


Q: So, we never got to the bottom of the fiducial argument then? A: No, we got to a place where it was — I think — mutually felt that further debate between these two parties would not get us any deeper. Now, whether we got to the bottom at that point or two different bottoms or what-not I leave that for other people to judge.

Statistics 411 Q: We heard a lot in this session and yesterday about a course, Stats 411, for undergraduates. Can you explain what this course was, what sort of topics were involved, and what happened to the notes? A: Well, I'll answer in reverse order in part. The notes take up about a filing case upstairs. That's what happened to the notes. I wouldn't be surprised if you could get a better answer from some of the people who took the course than you can from me. As to what was in it and so on — who wants to volunteer? Q: I've got part of an answer. I taught 411 two years ago, and prior to doing that, suitably humbled, I asked John if I could borrow a copy of the notes and he did. He sent me — not all of them — but a subset which I'll be happy to share if it's all right with John — and I read them and they're fascinating. But there wasn't a chance in the world that the students that I knew would be able to understand it — at least with me as a teacher. So, we did other topics. This was a course for seniors in engineering. And it tends to be the last course they take. A: This is not an adequate description of the situation when the course was being given by me. This was a course — as somebody has said around here — for seniors majoring in the department, graduate students, etc., etc., and strays from all sorts of places. Henry Braun said he took the course for six years in a row while he was a faculty member. The one thing that I always used to do was to get the seniors to sit up front at the table and make it clear I was going to answer their questions before I answered anybody else's. Otherwise I think the moral pressures on the students would have been bad. What it was, was an attempt to start at the beginning and be serious about it. Not serious mathematically, but serious statistically. That might mean a couple of weeks talking about one-sample questions, taking the view that the standard assumptions are almost guaranteed never to be the truth.
So, you want to understand, what happens when they're not true. Does this accord with your readings, Howard? Audience: I think, I tried to do that.


A: I think Stephan certainly sat through this course at least once. Audience: Twice, I think. The course was like this. John came in with a Ziploc; the lectures were nicely parcelled out. The course was in topics, numbered, and the numbering varied from year to year, so there were several numbers for each topic. A topic could be 14N but at the same time also topic 9 for the course given in 1979. Clearly the course content evolved quite a bit during the years. It was well-structured. It started out with single-sample — single-batch — questions. How do you estimate location? How do you estimate scale? What do you do if you have several of them? It went on, I think — in the years I took it — up to ANOVA. Audience: It included (%p,w)-technology, (g,h)-technology, orstats, gaps, simultaneous confidence, lots of things.

The Future of Data Analysis Including Statistics Q: What do you see as the future of industrial statisticians?

A: I never was on the industrial firing line, in any real sense anyway. The more difficult question, I think, is how much better will people making greater use of statisticians fare in the industrial competition than those who make lesser use of statisticians? Again, this is an establishment question. Some years ago, there was a period of half a dozen years when IBM didn't talk to BTL because the lawyers were feuding over the patent agreement. And after that relaxed, there was a delegation of brass from the IBM research laboratory who came to Murray Hill, and a detachment (from BTL) that went up there. I think the two largest surprises came because, you see, at this point you had people in the range of administrative levels that would get involved who wouldn't have been involved seven or eight years ago, and they really didn't know anything in detail about the other organization. The thing that struck them the most, I think, was how many statisticians there were at Murray Hill and how few there were up in suburban New York. And this was at a time when some people would have said these were the two best industrial laboratories, or maybe two of some very small number of the best. So, even in research, things were not uniform. Now, I don't think that IBM's difficulties came about from not having more statisticians — it would be nice to think that, but there were other reasons. They were generally a stick-in-the-mud outfit as far as their computers went. And they were shocked when they first realised they were spending more money on software development than on hardware development. If industrial statisticians are going to flourish, they are going to have to do different things than they used to. Industrial arithmeticians


are not going to get hired as such. Arithmetic is a well-stabilized field and, to the extent that it's needed, people pick it up. But statistics and data analysis isn't at the moment a well-stabilized field, and hasn't been, and there can be a particular need for having people who are reasonably up-to-date and are growing forward and in various directions. Whether this means TQM or not, I don't know. George isn't here, so we cannot put the bite on him and see what he has to say. I think there's a strong future if they adapt well enough to the changing needs, and I just wonder, as a whole, how many of them have been changing at all. The industrial statisticians, in double red quotes, that we had here these two days include Colin, who was talking about the kind of experimental design that would drive a classical experimental designer up the wall, and John Chambers, who was talking about the origins of S and where it might go and whether it might accommodate EDA. Times are changing, and if the industrial statisticians don't change, other people will do the change — maybe called something else. Remember the early days of cathode ray tubes — TV display tubes — the net result of the situation was to invent a new profession, called shrinkage analysts. Because at that time about three percent of the things that started out to be TV display tubes came out at the other end of the process as satisfactory finished ones. So, what you had to analyse was how things shrank as you went along. Now, the fact that they got to be shrinkage analysts says that the industrial statisticians either weren't there or that the industrial statisticians at the time didn't see the need to think differently. I see Stu in the background there. And about the only thing we are willing to fight about publicly is Taguchi, I think. I regard much of what Taguchi said as overexpressed, unwarranted language.
But it is not clear to me that in the early stages of fixing processes you need the degree of security that comes with classical experimental design. And that if you operate a la Taguchi you may get most of the gold at the grass roots, and after that, if you don't go and do something better, it's just too darn bad. But that doesn't leave me feeling bad. Now, I don't have to deal with these people who learn Taguchi as a watchword and don't know anything about him. Those who do, have my deep sympathy. Q: I tell people, John, that they should listen to what the gentleman has to say philosophically and avoid his technology. A: What I'm saying is maybe the opposite. Don't believe what he says about the security of his results, but in early days the technology may be a good one. Maybe — I don't know. I have not been on the firing line in this direction.


Q: It's like pulling weeds. A: Well, I pull weeds. Q: My question relates to a question I asked you in 1961. A: Well, I hope I won't give the same answer. Q: The question at that time was - I was starting out in statistics - I asked you what I should read. You told me, the early Journal of the Royal Statistical Society, Supplement and Discussion. I found this a very important learning experience and have also been using it in teaching. I was wondering, if a young person came to you today, what would you tell him to start reading? A: I guess you have to start at the same place, because to start anywhere else, you assume that they are a lot further along than you are when you start. And that one refers also to some of this thing about consulting, because the nearest thing to a surrogate for consulting that I know is to go and read the Supplement to JRSS. This is something that only requires a library, and it's not going to penetrate nearly as much as experience would, but it's going to penetrate in some of the same directions. There was a reasonably savvy group of people who were in the Industrial Application section of the RSS in those days and the agricultural departments, and a lot of this does come through, as you and I both know. Q: How many Ph.D. students did you have over the years? A: Stephan, you have been looking at lists, maybe you know? Audience: I don't know. Audience: Eileen said last night that you had over forty. Q: What have been some of your greatest satisfactions and regrets over the years? A: I've avoided the classification. Q: What do you see as the future of statistics in Princeton? A: (Laughter) I have various sized crystal balls, but none of them big enough for this. We're in a time of academic retrenchment. We're presumably going to lose some good statistical departments in some places. If there were sufficient numbers of analogs of me, so that teaching things could be tried out and enough books written, then...
There's a basic difficulty which Dick Link would present as saying that a good statistician must be a schizophrenic, because he has to deal with uncertainty and the measurement of uncertainty — that's his main task — and to do this using the most certain tool we have, which is mathematics. He has to bridge across the gap here. I don't see any reason to believe that statistics in a mathematics department is going to be other than a hard and dangerous life. On the other hand, statistics as a separate entity is going to have a


different set of reasons for being a hard and dangerous life. Paul Velleman was arguing about statistics being a science. I would tend to think it would have been more accurate to say science-and-technology. We face the facts, but the academic technologists have by and large ceased to teach engineering. And the way academic society is organized, it's not clear how a pure technology acquires intellectual stature, except through individuals. But for my money, statistics, along the lines of the Mosteller & Tukey review in the psychology handbook, data analysis — statistics, is a pure technology. In physics, theoretical physics draws the attention of most of the undergraduate students or most of the graduate students. Now, that's maybe a good thing in a backhanded sort of way. Somebody was telling me yesterday that they're making 1400 physics Ph.D.'s a year in this country and there aren't going to be that many who do theoretical physics. On the other hand, theoretical physicists, it's been well established, can be converted to almost anything. Experimentalists probably can't. So maybe it's good that theoretical physics attracts the crowd. The point is, though, that physics has had an old well-established intellectual reputation. And I don't think we'd get away with training 1400 Ph.D.'s a year in statistics and have most of them leave and go into all sorts of other things. I don't think that would be as acceptable as it is for theoretical physicists. There are some internal contradictions at very high levels. If you wanted to ask me where I think I failed the profession most, it would be in the direction of (not) doing something about this.

ELIZABETH TUKEY'S SPEECH

The dinner celebrating John's 80th birthday was an extraordinary festivity held on June 19, 1995 at Prospect House (Princeton University) with the participation of about a hundred of John's friends, students, and colleagues. Everyone present expressed their sincere best wishes to John, and many communicated their gratitude with a short speech. The wealth of anecdotes paid tribute to the remarkable dimensions of John's career, life, and personality. The following speech was written by Elizabeth Tukey for this special occasion.

I have a confession to make. I asked if I might introduce one of John's oldest and best friends tonight - mainly because I want all our friends to understand what an influence he has been in John's life.

John first met Bill Baker at the Princeton Graduate College in the academic year 1937-38. John was still a chemist, and Bill, also a chemist, had entered Princeton a year earlier. Bill got his degree in Chemistry in 1938, and John got his in Mathematics in 1939. The war intervened, but by February of 1945 John had taken a full time job with Bell Labs, where Bill was then an Executive Director. In September of that same year Sam Wilks asked John to help him at Princeton, so John returned on a part time basis. John and Bill were both at Bell Labs for the next thirty-five years until Bill retired in 1980 from his position as Chairman of the Board of the Labs. John retired in 1985 as Associate Executive Director, Research.

During the war John had been heavily involved in defense projects. I have often said, to a few intimate friends, that most people have never realized that John had a third job - that of giving his expertise to an array of U.S. government agencies. For example, he was sent by the State Department to the Nuclear Test Ban Talks in Geneva in 1959 as a member of the Technical Working Group; and, also by the State Department, to the International Conference on the Human Environment in Stockholm in the early 1970's.

But Bill Baker had managed to involve John in Washington projects long before that. Bill was a behind-the-scenes advisor to the government not only on scientific policy, but also on persons who could contribute their expertise to help formulate and promote the policies. Having studied and worked in the field of personnel management, I know that the better you are at picking the right person for the right job, the less it shows to the untrained eye. It all just seems to happen fortuitously.
Bill's seconding of John to all sorts of jobs was both masterful and astute. It


suited John's taste for interesting problems to solve and very interesting people, with a great diversity of talents, to associate with.

In the same way that Bill understood John's temperament and abilities, so too did he understand the other people who worked at the Labs. I know I was absolutely amazed when I learned that professional people at Bell Labs were not hired on the basis of how closely their training and experience fit the job description of the vacancy. No, the people were hired on the basis of their individual quality, and the job description modified to suit their qualifications.

When the history of science and engineering in the 20th century is written, Bell Labs will hold an outstanding and unique place from which all sorts of seminal ideas and inventions have emerged — a place of which both Bill Baker and John can be very proud. From time to time, over the years, I have suggested to John that perhaps he should take seriously one of the many job offers he received, and move from Princeton. His reply was always the same: "Where could I ever find another Bell Labs?"

Because we have always lived in Princeton, it has been easy for friends and colleagues to think of John in terms of the University only. So perhaps, tonight, I have brought Bell Labs to Princeton and helped to underline the opportunities, resources and recognition the Labs have bestowed on him.

In closing, I would like to address a few words to my friend, Frances Baker. As the wife of another dedicated workaholic, I understand the selfless love and devotion, accommodation and deprivation required to "keep them on the road." However, (and I know women are now forbidden to talk about this) from the day of our marriage, John and I have been "a team" and thereby feel we have accomplished more than either of us could have done individually.

Princeton, June, 1995
Elizabeth Tukey

PROGRAM OF THE CONFERENCE IN HONOR OF JOHN W. TUKEY ON HIS 80TH BIRTHDAY

Jadwin Hall A10, Princeton University
June 19 and 20, 1995
Organized by: Luisa T. Fernholz and Stephan Morgenthaler

Monday, June 19

Chair: Stephan Morgenthaler
9:00 - 9:20     Opening remarks
9:20 - 10:00    John Hartigan
10:00 - 10:40   Colin Mallows
10:40 - 11:00   break
11:00 - 11:40   William Cleveland
11:40 - 12:20   Paul Velleman
12:20 - 2:00    Lunch

Chair: Henry Braun
2:00 - 2:30     Karen Kafadar
2:30 - 3:00     George Easton
3:00 - 3:20     break
3:20 - 4:00     John Chambers
4:00 - 4:40     Frank Hampel
5:00 - 6:40     A discussion with John Tukey (videotape)

Tuesday, June 20

Chair: David Hoaglin
9:20 - 10:00    David Brillinger
10:00 - 10:40   David Donoho
10:40 - 11:00   break
11:00 - 11:30   Roy Welsch
11:30 - 12:00   Leo Goodman
12:00 - 12:30   Frederick Mosteller
12:30 - 2:00    Lunch
2:00            A panel discussion with John Tukey (audience participation)


LIST OF PARTICIPANTS

Allison, David B. · St. Luke's-Roosevelt Hospital, Columbia University, Obesity Research Center, 1111 Amsterdam Ave, NY, NY 10025 - [email protected]
Amaratunga, Dhammika · RWJ Pharmaceutical Research Institute, Route 202, P. O. Box 300, Raritan, NJ 08869 - amaratunga@alloy.bitnet
Andrews, David F. · University of Toronto, 100 St. George Street, Toronto, ON M5S 1A1 CANADA - [email protected]
Anscombe, Francis J. & Mrs. · Department of Statistics, Yale University, New Haven, CT 06520
Arencibia, Orlando · Department of Applied Mathematics, State University of New York, Stony Brook, NY 11794
Baker, William O. & Mrs. · AT&T Bell Laboratories, 600 Mountain Avenue, Murray Hill, NJ 07974
Basford, Kaye E. · Department of Agriculture, The University of Queensland, Brisbane QLD 4072, Australia - [email protected]
Benson, George P. · Rutgers University, 81 New Street, MEC Building, Newark, NJ 07102 - [email protected]
Bergum, James · Bristol-Myers Squibb, 1 Squibb Drive - P.O. Box 191, New Brunswick, NJ 08903-0191 - [email protected]
Binkowski, Edward S. · CUNY-Hunter / Math. & Stat., Park Avenue at 68th Street, New York, NY 10011
Bittrich, Gus & Mary · AT&T Bell Laboratories, 600 Mountain Avenue, Murray Hill, NJ 07974 - [email protected]
Braun, Henry · Educational Testing Service, Rosedale Road, Princeton, NJ 08541 - [email protected]
Brillinger, David R. · Department of Statistics, University of California, Berkeley, CA 94720-3860 - [email protected]
Buyske, Steve · Rutgers University, Statistics Department - Hill Center, New Brunswick, NJ 08903 - [email protected]
Cabrera, Javier · Rutgers University, Department of Statistics - Hill Center, New Brunswick, NJ 08903 - [email protected]
Chambers, John M. · AT&T Bell Laboratories, 600 Mountain Avenue, Murray Hill, NJ 07974-2008 - [email protected]
Ciminera, Joseph L. · 10217 Valley Forge Circle, King of Prussia, PA 19406
Cleveland, William S. · AT&T Bell Laboratories, 600 Mountain Avenue, Murray Hill, NJ 07974-2008 - [email protected]
Donoho, David L. · Department of Statistics, Stanford University, Stanford, CA 94305 - [email protected]
Easton, George S. · Graduate School of Business, University of Chicago, 1101 East 58th St., Chicago, IL 60637-1511 - [email protected]



Faith, Myles · Obesity Research Center, St. Luke's-Roosevelt Hospital, Columbia University, 1111 Amsterdam Avenue, New York, NY 10025 - [email protected]
Fernholz, Robert · Intech, 1 Palmer Square, Suite 303, Princeton, NJ 08542 - [email protected]
Fernholz, Luisa · Minerva Research Foundation, Princeton, NJ 08540 and Temple University, Philadelphia, PA 19122 - [email protected]
Fernholz, Dan · 12 Dogwood Lane, Princeton, NJ 08540
Field, Christopher · Department of Math. & Statistics, Dalhousie University, Halifax, NS B3H 3J5, Canada - [email protected]
Filliben, James J. · National Instit. Stand. & Tech. (NIST), A337 Administration Building, Gaithersburg, MD 20899-0001 - [email protected]
Fisher, Nick · Division of Math. & Stat., CSIRO - Locked Bag 17, North Ryde, NSW 2113 - [email protected]
Fossceco, Stewart · Bristol-Myers Squibb, P. O. Box 4000, Princeton, NJ 08543-4000 - [email protected]
Friedman, Herman P. and Mrs. · 45 East End Avenue - 4F, New York, NY 10028 - [email protected]
Goodall, Colin R. · Department of Statistics, Pennsylvania State University, University Park, PA 16802 - [email protected]
Goodman, Leo A. · Department of Statistics, University of California, Berkeley, CA 94720-4761 - [email protected]
Gunning, Robert C. & Mrs. · Department of Mathematics, Princeton University, Fine Hall, Princeton, NJ 08544-1000 - [email protected]
Hampel, Frank · Seminar für Statistik, ETH-Zentrum, 8092 Zürich, Switzerland - [email protected]
Hartigan, John A. & Mrs. · Yale University, Box 2179 - Yale Station, New Haven, CT 06520-2179 - [email protected]
Hoaglin, David C. · Abt Associates Inc., 55 Wheeler Street, Cambridge, MA 02138-1168 - [email protected]
Hoang, Thu M. · Université R. Descartes (Paris-V), Laboratoire de Statistique Médicale, 45 rue des Saints-Pères, 75270 Paris Cedex 06, France - [email protected]
Huber, Paul B. · Merck Research Laboratories, WP44L-214A, West Point, PA 19446 - [email protected]
Hunter, Stuart J. · 503 Lake Drive, Princeton, NJ 08540 - [email protected]
Hurvich, Clifford · Statistics Department, New York University, 44 West 4th Street, New York, NY 10012 - [email protected]
Iglewicz, Boris · Department of Statistics, Temple University, Philadelphia, PA 19122 - [email protected]


Izenman, Alan J. · Temple University, Speakman Hall (006-00), Philadelphia, PA 19122 - [email protected]
Jones, Lyle V. · University of North Carolina, Chapel Hill, NC 27599-3270 - [email protected]
Kafadar, Karen · Department of Mathematics, University of Colorado, Denver, Colorado 80217-3354 - [email protected]
Kettenring, Jon R. · Bellcore, 445 South Street, Morristown, NJ 07960-6433 - [email protected]
Krystinik, Kathy Bell · 4 North Drive, Westerly, RI 02891
Kurtz, Thomas E. & Mrs. · Mathematics Department, Dartmouth College, P.O. Box 962, Hanover, NH 03755 - [email protected]
Lechner, James A. · National Instit. Stand. & Tech. (NIST), A337 Administration Building, Gaithersburg, MD 20899 - [email protected]
Lee, James S. · Bristol-Myers Squibb Company, 1 Squibb Drive, P. O. Box 191, New Brunswick, NJ 08903-0191 - [email protected]
Lewis, Charles & Mrs. · Educational Testing Services, Rosedale Road, Princeton, NJ 08541 - [email protected]
Lieberman, Silvi · Department of Statistics, Speakman Hall, Temple University, Philadelphia, PA 19122
Lin, Paul Kuang-Hsien · 5159 Provincial Drive, Bloomfield Hills, MI 48302-2529
Mallows, Colin L. · AT&T Bell Laboratories, 600 Mountain Avenue, Murray Hill, NJ 07974-2008 - [email protected]
Meeker, Jeff B. · Bristol-Myers Squibb Company, P. O. Box 4000 J33-02, Princeton, NJ 08543 - [email protected]
Meier, Paul & Mrs. · Department of Statistics, Columbia University, New York, NY 10027 - [email protected]
Moore, Dirk F. · Temple University, Speakman Hall 006-00, Philadelphia, PA 19122 - [email protected]
Morgenthaler, Stephan & Mrs. · Département de Mathématiques, EPFL, MA-Ecublens, 1015 Lausanne, Switzerland - [email protected]
Moses, Lincoln E. · Stanford University, Department of Statistics, Sequoia Hall, Stanford, CA 94305 - [email protected]
Mosteller, Frederick · Department of Statistics, Harvard University, Science Center, One Oxford Street, Cambridge, MA 02138
Murphy, R. Bradford · 436 Little Silver Point Road, Little Silver, NJ 07739
Nguyen, Ha H. & Mrs. · Merck Research Laboratories, P. O. Box 2000 WBD-40C, Rahway, NJ 07065 - [email protected]
Oppenheimer, Leonard · Merck Company, P. O. Box 2000 - RY33-408, Rahway, NJ 07065 - [email protected]
Ortega, Wilfrido · Department of Statistics, Speakman Hall, Temple University, Philadelphia, PA 19122


Pinkham, Roger S. · Pure & Applied Mathematics, Stevens Institute of Technology, Hoboken, NJ 07030 - [email protected]
Seheult, Allan · University of Durham, Department of Mathematical Sciences, Durham, DH1 3LE England - [email protected]
Silverberg, Arthur R. · American Cyanamid, P. O. Box 400, Princeton, NJ 08543-0400 - [email protected]
Soong, Chi-Wen · Bristol-Myers Squibb / J13-06, P. O. Box 4000, Princeton, NJ 08543-4000 - [email protected]
Soper, Keith A. · Merck Research Laboratories, WP44L-214A, West Point, PA 19486 - [email protected]
Spitzer, Lyman & Mrs. · Princeton University, 116 Peyton Hall, Princeton, NJ 08544
Starshak, Albert J. & Mrs. · 4852 Woodridge Ct. S., Minnetonka, MN 55345
Steinberg, Linda · c/o Howard Wainer, Educational Testing Service, Mail Stop T-15, Princeton, NJ 08541
Tatum, Lawrence · Taylor Lane Farm, Riverton, NJ 08077 - [email protected]
Thayer, Dorothy · Educational Testing Service, Mail Stop T-15, Princeton, NJ 08541 - [email protected]
Thoma, Mathis · Ciba-Geigy Corporation, 556 Morris Avenue, Summit, NJ 07901 - [email protected]
Thomson, David J. · AT&T Bell Labs, 600 Mountain Avenue 2C-360, Murray Hill, NJ 07974-2070 - [email protected]
Tick, Leo J. · New York University Medical Center, 340 East 64th Street, New York, NY 10021 - [email protected]
Tsai, Kao-Tai · Schering-Plough Research Institute, 2015 Galloping Hill Road K14-2 2125, Kenilworth, NJ 07033-0530 - [email protected]
Tukey, John W. · Princeton University, Department of Mathematics, Princeton, NJ 08544-1000 - [email protected]
Tyler, David · Rutgers University, Department of Statistics, Hill Center, New Brunswick, NJ 08903 - [email protected]
Velazco, Susanna · c/o Orlando Arencibia, Department of Applied Mathematics, State University of New York, Stony Brook, NY 11794
Velleman, Paul · Department of Social Statistics, Cornell University, 358 Ives Hall, Ithaca, NY 14853-3901 - [email protected]
Wainer, Howard · Educational Testing Service, Mail Stop T-15, Princeton, NJ 08541-0001 - [email protected]
Wallace, David L. · University of Chicago, 5734 S. University Avenue, Chicago, IL 60637-1546 - [email protected]


Welsch, Roy E. · Massachusetts Institute of Technology, 50 Memorial Drive, Cambridge, MA 02139-4307 - [email protected]
Wilks, Allan R. · AT&T Bell Laboratories, 600 Mountain Avenue 2C-283, Murray Hill, NJ 07974 - [email protected]
Zhang, Donghui · Merck & Company, Inc., P. O. Box 2000 - RY70-38, Rahway, NJ 07065-0900 - [email protected]
Zwick, Rebecca · Educational Testing Service, Mail Stop T-15, Princeton, NJ 08541 - [email protected]

John W. Tukey teaching at Princeton University during the early years. (Photograph by Elizabeth Menzies)

John and Elizabeth Tukey at the 1982 IEEE award ceremony in Boston, MA.


John W. Tukey and Elizabeth L. Rapp on the day of their wedding, July 19, 1950. (Photograph by Elizabeth Menzies)

Receiving the National Medal of Science from President Nixon in 1973.

John Tukey with other members of the President's Science Advisory Committee at a meeting with President Eisenhower. (Newport, RI, 1960)



Receiving a present from Arno Penzias, the vice-president of Research at Bell Labs, June 1985.

John Tukey and Bill Baker the night of the dinner at Prospect House, Princeton University, celebrating John's 80th birthday, June 19, 1995. (Photograph by Gus Bittrich)

With Luisa Fernholz and Stephan Morgenthaler; David Brillinger on the far right. Prospect House, Princeton University, June 19, 1995. (Photograph by Ha Nguyen)

With some of the participants of the conference to honor his 80th birthday. Princeton University, June 19, 1995.

SCIENTIFIC PAPERS

ERRORS-IN-VARIABLES REGRESSION ESTIMATORS THAT HAVE HIGH BREAKDOWN AND HIGH GAUSSIAN EFFICIENCY Dhammika Amaratunga*

Abstract. Errors-in-variables regression (EVR) is a procedure used to study the effect of an explanatory variable on a response variable when the observed values of the former have an error component. Like other ordinary least squares (OLS) procedures, EVR, when fitted via OLS, is highly nonrobust. In this paper, we discuss an alternative fitting procedure that has high breakdown, is resistant to outliers in either variable, and exhibits high finite sample efficiency at the nominal model.

Key Words: Errors-in-variables regression, robust regression

1. Introduction

For linear regression, it has been argued that a one-step M-estimator (Beaton and Tukey, 1974), whose single M step is taken from a high breakdown estimator, inherits the high breakdown property, while producing high asymptotic efficiency at the nominal Gaussian situation (Rousseeuw, 1984, 1994, Rousseeuw and Leroy, 1987, Jureckova and Portnoy, 1989), although whether the high efficiency carries over to finite samples, particularly when the data contain leverage points, has been debated (Morgenthaler, 1989, 1991, Stefanski, 1991, Coakley et al., 1994, Rousseeuw, 1994). The M step can be accomplished either as a weighted least squares (WLS) estimator (in which case, the procedure is referred to as w-estimation) or as a Newton-Raphson step. In this paper, we implement w-estimation for errors-in-variables regression (EVR) by beginning with a trimmed least squares (TLS) estimator, where the amount of trimming, α, controls the degree of resistance, and

* The RW Johnson Pharmaceutical Research Institute, Raritan, NJ 08869, USA.


then applying one step of a redescending M-estimator, whose tuning constant, τ, controls the weighting scheme and thereby the efficiency of the procedure. Brown (1982) also presented a w-estimation approach for EVR. Maronna, Bustos and Yohai (1979), Carroll and Gallo (1982), Ketellapper and Ronner (1984), Rousseeuw (1987), and Zamar (1989) describe other robust approaches for EVR.

2. w-Estimation for Errors-in-Variables Regression

In EVR, the data are modelled as:

x_i = s_i + u_i,    y_i = t_i + v_i,    t_i = a + b s_i,    (i = 1, ..., n),

with the errors {u_i} and {v_i} all mutually independent and the error distributions, F_u and F_v, symmetric about zero. The case where the {s_i} are nonstochastic "true" underlying values is the "functional" form of the model; the case where it is additionally assumed that s_i ~ F_s, with the {s_i} mutually independent and independent of the {u_i} and {v_i}, is the "structural" form of the model. At the "nominal" model, F_u = N(0, σ_u²), F_v = N(0, σ_v²), and, if applicable, F_s = N(μ_s, σ_s²). In order that the model be identifiable, we make the "classical" assumption that the ratio σ_v²/σ_u² is known; for simplicity, a unit ratio is assumed with no loss of generality. Madansky (1959) and, more recently, Fuller (1987) provide thorough reviews and extensive bibliographies related to these models.

At a "contaminated" model, several observations are assumed to have been generated from a distribution other than the nominal distribution. An observation may be regarded contaminated because of an unusual u_i and/or v_i. We model this situation by taking F_u and F_v to be contaminated distributions. For the moment we shall assume that there are no outliers among the s_i. The contaminated distributions are Slash Contaminated Gaussian (SCG) distributions (see Gleason, 1993) whose parameter ξ characterizes the extent of contamination; the expected proportion of contaminated observations is thus 1 − (1 − ξ_u)(1 − ξ_v).

The orthogonal residual from (x_i, y_i) to the straight line defined by the coefficients (a, b) is r_i = (y_i − a − b x_i)/√(1 + b²). The OLS fit for EVR is the line that minimizes, in terms of a and b, the sum, Σ r_i², of squared orthogonal distances; this yields

b_O = (s_yy − s_xx + √((s_yy − s_xx)² + 4 s_xy²)) / (2 s_xy)    and    a_O = ȳ − b_O x̄,

where s_xx, s_xy and s_yy are the elements of the sample covariance matrix
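The closed-form fit above is straightforward to compute directly from the sample covariances. A minimal sketch in Python/NumPy (function and variable names are ours, not the paper's):

```python
import numpy as np

def evr_ols(x, y):
    # Unit-ratio orthogonal-regression fit of y = a + b*x.
    # Minimizes the sum of squared orthogonal distances; the slope is the
    # closed-form root built from the sample covariances s_xx, s_xy, s_yy.
    # Assumes s_xy != 0.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    b = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    a = y.mean() - b * x.mean()
    return a, b
```

For points lying exactly on y = 1 + 2x, the fit returns (a, b) = (1, 2), agreeing with the ordinary least-squares line when there is no scatter.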


M of the observations (x_i, y_i). The same estimate can be derived via the method of moments and via principal components analysis.

We now describe our implementation of w-estimation. It comprises two steps, the initial step and the w step. In the initial step, a high breakdown TLS estimate is calculated. In the w step, one M-estimation step, realized as a WLS estimate, is performed. We assume throughout that there are no tied x-values.

The initial step: First calculate the OLS estimate and call it (a_0, b_0). Next, determine the TLS estimator as follows (Rousseeuw, 1984, introduced TLS for linear regression and our algorithm follows suggestions by him and Stromberg, 1993). The TLS criterion to be minimized with respect to a and b is

Q(a, b) = Σ_{i=1}^{[αn]} r²_(i)(a, b),

where r²_(1) ≤ ... ≤ r²_(n) are the ordered squared orthogonal residuals. Since no closed form solution is available and since this estimate will be improved upon at the second step, an approximate solution should suffice. The approximate solution is obtained as follows; in the sequence of steps that follow, (a', b') stores the "current" estimate, initialized at (a_0, b_0). Take any pair of observations and let the straight line joining them have intercept a* and slope b*. If Q(a*, b*) < Q(a', b'), then replace (a', b') with (a*, b*); otherwise make no change. Repeat this for all pairs of observations, or a large random sample of all such pairs. Once this is done, select the [αn] observations with the smallest absolute orthogonal residuals and obtain the OLS estimate (a**, b**) for this subset of observations. If Q(a**, b**) < Q(a', b'), then replace (a', b') with (a**, b**); otherwise make no change. The resulting (a', b'), which we shall now call (a_t, b_t), defines the TLS fit.

The w step: Let r_{i,0} be the orthogonal residuals after the initial step. Take S = median_i |r_{i,0}| as a resistant measure of residual scale. If |r_{i,0}| < τS, determine weights based on Tukey's biweight,

w_i = (1 − (r_{i,0}/(τS))²)²,

and otherwise set w_i = 0; then minimize Σ_i w_i r_i²(a, b) with respect to a and b. To do this, first minimize fixing b at b_t, yielding a_1, and then minimize fixing a at a_1, yielding b_1, where each partial minimization is a weighted analogue of the closed-form OLS fit; repeat the process with (a_1, b_1) and iterate a few times to convergence. This yields the final w-estimate (a_w, b_w). If all the weights w_i are zero, the TLS estimate is used as the final estimate.
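The two steps can be sketched compactly. The following Python/NumPy code is our reading of the procedure, not the authors' implementation; in particular the residual scale S = median|r_{i,0}|, the zero-scale guard, and the alternating weighted refit are our assumptions:

```python
import itertools
import numpy as np

def orth_resid(x, y, a, b):
    # Signed orthogonal distance from (x_i, y_i) to the line y = a + b*x.
    return (y - a - b * x) / np.hypot(1.0, b)

def orth_ols(x, y):
    # Closed-form unit-ratio orthogonal fit (assumes s_xy != 0).
    sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    b = (syy - sxx + np.hypot(syy - sxx, 2.0 * sxy)) / (2.0 * sxy)
    return y.mean() - b * x.mean(), b

def tls_fit(x, y, alpha=0.5):
    # Initial step: minimize the sum of the h = [alpha*n] smallest squared
    # orthogonal residuals over all two-point lines, then refit by OLS on
    # the h points closest to the best candidate line.
    n = len(x)
    h = max(2, int(alpha * n))
    crit = lambda ab: np.sort(orth_resid(x, y, *ab) ** 2)[:h].sum()
    best = orth_ols(x, y)
    for i, j in itertools.combinations(range(n), 2):
        b = (y[j] - y[i]) / (x[j] - x[i])        # no tied x-values assumed
        cand = (y[i] - b * x[i], b)
        if crit(cand) < crit(best):
            best = cand
    keep = np.argsort(np.abs(orth_resid(x, y, *best)))[:h]
    cand = orth_ols(x[keep], y[keep])
    return cand if crit(cand) < crit(best) else best

def w_step(x, y, a, b, tau=7.0, iters=20):
    # w step: biweight weights from the initial-fit residuals, then
    # iterated weighted orthogonal fitting (alternating in a and b).
    r0 = orth_resid(x, y, a, b)
    S = np.median(np.abs(r0)) or 1e-12           # resistant scale, guarded
    u = r0 / (tau * S)
    w = np.where(np.abs(u) < 1.0, (1.0 - u ** 2) ** 2, 0.0)
    if w.sum() == 0.0:
        return a, b                              # fall back to the TLS fit
    for _ in range(iters):
        a = np.average(y - b * x, weights=w)     # best a given b
        xm, ym = np.average(x, weights=w), np.average(y, weights=w)
        sxx = np.average((x - xm) ** 2, weights=w)
        syy = np.average((y - ym) ** 2, weights=w)
        sxy = np.average((x - xm) * (y - ym), weights=w)
        b = (syy - sxx + np.hypot(syy - sxx, 2.0 * sxy)) / (2.0 * sxy)
    return a, b

def w_estimate(x, y, alpha=0.5, tau=7.0):
    x, y = np.asarray(x, float), np.asarray(y, float)
    return w_step(x, y, *tls_fit(x, y, alpha), tau=tau)
```

With one gross outlier among ten points on an exact line, the plain orthogonal fit is pulled away from the line, while `w_estimate` recovers it: the outlier gets zero biweight and the weighted refit uses only the clean points.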


Alternatives to the initial step: This step is computationally rather intensive, but the actual computations are quite simple. Alternative estimators for b that have reasonable resistance, and therefore could be used instead, are:

1. Siegel's (1982) repeated median estimator:

b_RM = median_i ( median_{j ≠ i} ( (y_j − y_i)/(x_j − x_i) ) ).

This estimator has finite sample breakdown point 0.50 and is also computationally intensive, although less so than the TLS estimator.

2. The rotated three group median (RTGM) estimator (Amaratunga, Cabrera and Nguyen, 1995) is an extension of Tukey's three group median line (see Hoaglin, Mosteller and Tukey, 1983, for a description) that is calculated as follows. First the data are split into three groups,

A_L = {(x_i, y_i) : x_i ≤ x_([n/3])},
A_M = {(x_i, y_i) : x_([n/3]) < x_i ≤ x_([2n/3])},
A_R = {(x_i, y_i) : x_i > x_([2n/3])}.

Then the axes are rotated by the angle to the horizontal of the line joining the medians of groups A_L and A_R. The three groups A_L, A_M and A_R are redefined according to the new axes and called A*_L, A*_M and A*_R. The RTGM estimator of b, b_RTGM, is the slope, with respect to the original axes, of the line joining the medians of groups A*_L and A*_R. The process can be one-step or iterated. The finite sample breakdown point of b_RTGM is approximately 1/6, which is less than what can be obtained from the other methods presented above, but it does have the advantage of requiring much less computation.

Alternatives to the w step: The weight function presented above is the biweight. Any other robust redescending weighting function could be used instead (see, e.g., Hoaglin, Mosteller and Tukey, 1983, for a list).
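For illustration, the repeated median slope is only a few lines (a sketch under the paper's no-tied-x assumption; names are ours):

```python
import numpy as np

def repeated_median_slope(x, y):
    # b_RM = median_i ( median_{j != i} (y_j - y_i) / (x_j - x_i) ).
    # Finite sample breakdown point 0.50; uses all O(n^2) pairwise slopes.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    inner = [np.median([(y[j] - y[i]) / (x[j] - x[i])
                        for j in range(n) if j != i])
             for i in range(n)]
    return float(np.median(inner))
```

The double median makes the estimator indifferent to any single wild point: the inner medians for the clean points are unaffected, and the outer median discards the one distorted inner value.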

3. Properties of the w-Estimator for Slope

The following discussion refers only to estimation of b. The OLS estimate of b, b_O, has high nominal efficiency but has zero resistance, since it can be seriously affected by a single aberrant data point. The w-estimate of b, b_w, fares better.

Resistance to outliers: The initial step of the w-estimation process has breakdown α. Since a redescending weight function is used at the w step, observations that have sufficiently large residuals at the initial step will be assigned zero weight. Thus the totality of the process has finite sample


breakdown point a. A more or less identical argument shows the exact fit point is also a. Equivariance: Both 60 and bw are affine equivariant. Efficiency: The efficiency of bw relative to 60 is governed by the choice of τ. Setting τ large will result in high nominal efficiency but if set too large, 6 w 's efficiency at contaminated models will be impaired. Therefore a compromise has to be reached. Since the w step starts with a high breakdown initial estimate, all values far from the [an] points "selected" by it, will have large residuals, r^o, thus it seems possible to pick a value for τ that is larger than usual. Our choice for our small simulation study of bw is outlined below. In simulation studies of biased estimators, relative efficiency is often de­ fined as the ratio of MSEs, MSERE=MSE(6 0 )/MSE(& U) ), where MSE(6») = E(6» - 6) 2 , since the true value of b is known. The problem with this criterion in robustness studies is that sporadic large estimates that oc­ cur, particularly for the nonresistant OLS estimator at contaminated set­ tings, result in hugely inflated MSEs that render comparisons difficult (see also Coakley et al., 1994). Therefore, at such settings, instead of MSEs, we used MADs (MAD=median absolute deviation from the true value), where MAD(6„)=median(|6* — b\) and MAD relative efficiency is defined as MADRE=MAD(6o)/MAD(6„). Choice of a and r: We set a = 0.50 to achieve the highest possible breakdown, though we realize that particularly with small samples, this may occassionally lead to absurd results (see, e.g., Stefanski, 1991). The value of τ controls the efficiency of the procedure. A suitable choice for τ depends on many factors, including n, a, b, au, σν, and as; even "sui­ table" can be defined in many ways, e.g., via asymptotic nominal efficiency, finite sample polyefficiency (Tukey, 1980, Morgenthaler and Tukey, 1991). As with other w- or M-estimation procedures, the choice of the tuning constant remains somewhat subjective. 
For our simulations with sample size n, we selected a value, τ_n, for τ based on finite sample nominal efficiency as follows: we set a = 0, b = 1, σ_u = σ_v = 1, σ_s = 3, then chose τ_n (via simulation) such that MSERE = 0.90 at the nominal model; this gave τ_10 = 11.5, τ_20 = 7.0, τ_30 = 6.2. We found that moderate deviations in the values of a, b, σ_u, σ_v, σ_s did not necessitate much of a change in τ_n.

Simulation settings: We compared, via simulation, the finite sample properties of b_w and b_0 in functional errors-in-variables regression. To do this,


DHAMMIKA AMARATUNGA

we placed {s_i} at the (i/(n + 1))-th quantiles of a N(0, 3²) distribution (for i = 1, ..., n). Then we set t_i = b_T s_i, so that the true value of (a, b) is (0, b_T); values for b_T were tan(kπ/8), k = 1, 2, 3. Errors {u_i} and {v_i} were randomly generated from a Gaussian or SCG distribution and were added on to the {s_i} and {t_i} to yield {(x_i, y_i), i = 1, ..., n}. For the SCG distributions, we let ξ_u = ξ_v (= ξ), with values 0.01, 0.05 and 0.25; the most contamination occurs at ξ = 0.25, when, on average, about 44% of the observations are contaminated. Each setting was run 500 times. Simulation results were summarized as relative efficiencies and five number summaries. A sample of results is reported in Tables 1 and 2.
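The simulation design just described can be sketched in Python. The contamination scale of the SCG distribution (set to 10 here) is an assumption, since the excerpt does not restate it, and the estimators b_0 and b_w themselves are defined earlier in the paper and are not reimplemented — only the data-generating design and the MSERE/MADRE criteria are shown. Note that with ξ = 0.25 on each coordinate, an observation is contaminated with probability 1 − 0.75² ≈ 44%, matching the text.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

def scg(n, xi, k=10.0):
    """Scale-contaminated Gaussian: N(0,1) w.p. 1-xi, N(0,k^2) w.p. xi.
    (k = 10 is a hypothetical contamination scale, not from the excerpt.)"""
    scale = np.where(rng.random(n) < xi, k, 1.0)
    return rng.standard_normal(n) * scale

def make_sample(n, b_T, xi):
    """s_i at the (i/(n+1))-th N(0, 3^2) quantiles, t_i = b_T * s_i;
    observed (x_i, y_i) add Gaussian (xi = 0) or SCG errors."""
    nd = NormalDist(mu=0.0, sigma=3.0)
    s = np.array([nd.inv_cdf(i / (n + 1)) for i in range(1, n + 1)])
    t = b_T * s
    return s + scg(n, xi), t + scg(n, xi)

def msere(b0_hat, bw_hat, b):
    """MSE relative efficiency of b_w to b_0 over replicated estimates."""
    return np.mean((b0_hat - b) ** 2) / np.mean((bw_hat - b) ** 2)

def madre(b0_hat, bw_hat, b):
    """MAD relative efficiency, the criterion used at contaminated settings."""
    return np.median(np.abs(b0_hat - b)) / np.median(np.abs(bw_hat - b))
```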

Setting¹     n    b      MSERE    MADRE
             (ξ →)       0%       0%     1%     5%     25%
Reg          10   1.000  0.90     1.01   1.01   1.08   1.71
Reg          20   1.000  0.90     0.95   1.02²  1.25   3.44
Reg          30   1.000  0.90     0.92   0.94   1.39   6.00
Reg          20   0.414  0.89     0.99   0.99   1.33   2.42
Reg          20   2.414  0.91     0.97   1.17   0.98   2.87
Lev (L=2)    20   1.000  0.82     0.99   1.32   1.02   3.29
Lev (L=10)   20   1.000  0.06     0.72   0.80   0.90   1.64

¹ Reg = setting with no leverage points; Lev = setting with one leverage point L s_(n).
² Corresponding MSERE = 285.08.

Table 1: Efficiency of b_w relative to b_0.

Protection against gross estimation errors: We wished to assess the ability of b_0 or b_w to protect itself from producing grossly incorrect estimates. If B = (b_L, b_U) denotes a range of "tolerable" values for estimates of b (we took, loosely, b_L = min(b_0), b_U = max(b_0) after the first Gaussian simulation), the percent, %Off, of runs in a second simulation that yielded estimates outside B was taken as a measure of the probability of a gross estimation error (see Table 2).

Leverage: The analog of a leverage point in functional EVR is an outlier in {s_i}. To study the effect of a leverage point, we replaced s_(n) by L s_(n), for L = 2 and L = 10, and reran several simulations.
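The %Off criterion and the five number summaries reported in Table 2 are straightforward to compute from a vector of replicated estimates; a minimal sketch:

```python
import numpy as np

def five_number(est):
    """Min, Q1, Med, Q3, Max of a vector of replicated estimates."""
    return np.percentile(np.asarray(est), [0, 25, 50, 75, 100])

def pct_off(est, b_lo, b_hi):
    """%Off: percentage of estimates falling outside B = (b_lo, b_hi)."""
    est = np.asarray(est)
    return 100.0 * np.mean((est < b_lo) | (est > b_hi))
```

With the Gaussian-based range of the paper, one would call `pct_off(estimates, 0.673, 1.642)`.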


Data with no leverage point:
ξ      Est   Min     Q1     Med    Q3     Max        %Off¹
0      b_0   0.673   0.927  1.010  1.107  1.642      0.2
0      b_w   0.681   0.920  1.014  1.109  1.660      0.2
0.01   b_0   0.011   0.912  0.994  1.088  53.176     2.2
0.01   b_w   0.571   0.911  0.996  1.079  1.618      0.2
0.05   b_0   0.005   0.891  0.991  1.118  1277.385   14.2
0.05   b_w   0.637   0.907  0.987  1.086  1.801      0.4
0.25   b_0   0.000   0.615  1.003  1.620  13531.769  25.0
0.25   b_w   0.353   0.888  1.007  1.148  5.188      5.8

Data with leverage point (L=10):
ξ      Est   Min     Q1     Med    Q3     Max        %Off¹
0      b_0   0.900   0.982  1.003  1.023  1.097      0.0
0      b_w   0.613   0.976  1.005  1.031  1.565      0.4
0.01   b_0   0.153   0.978  1.001  1.020  2762.564   0.8
0.01   b_w   0.551   0.974  0.999  1.027  1.683      0.4
0.05   b_0   0.121   0.978  1.001  1.025  235.139    1.8
0.05   b_w   0.668   0.975  1.000  1.026  1.816      0.6
0.25   b_0   0.000   0.947  1.000  1.059  958.869    14.8
0.25   b_w   0.453   0.973  1.002  1.037  2.314      5.0

¹ %Off values are based on a different simulation than the one in which the five number summaries were obtained; %Off = percentage of estimates outside the range B = (b_L, b_U), where b_L and b_U are set at b_L = min(b_0) = 0.673, b_U = max(b_0) = 1.642, based on the Gaussian setting with no leverage points.

Table 2: Five number summary (and %Off) of b_0 and b_w when n = 20, b = 1.

Simulation results: Some observations from the simulations:

1. b_0, as expected, performed well at the Gaussian setting.

2. b_w becomes competitive even with a mere 1% contamination (e.g., when n = 20, b = 1, we found MSERE = 285.08, MADRE = 1.02; b_0 ranges from 0.011 to 53.176, whereas b_w ranges more robustly from 0.571 to 1.681).

3. b_w outperforms b_0 whenever there is more than a minimal amount of contamination, one of its clearest benefits being its ability to drastically reduce the probability and size of gross estimation errors by using a bounded ψ-function.


4. When the data contain an extreme leverage point, the high nominal efficiency of b_w vanishes. This, of course, is because we are using a tuning constant based on the design with no leverage point. Increasing τ_n appropriately will increase the nominal efficiency, although at the cost of efficiency elsewhere. Doing this in practice for a given dataset is difficult since, in a given run, a leverage point cannot be distinguished from an outlier in {u_i}.

5. Overall, w-estimation appears to be a useful robust alternative to ordinary least squares for errors-in-variables regression.

REFERENCES

[1] Becker, R.A., Chambers, J.M. and Wilks, A.R. (1988). The New S Language: A Programming Environment for Data Analysis and Graphics, Wadsworth, Belmont, CA.

[2] Amaratunga, D.J., Cabrera, J. and Nguyen, H. (1995). "Robust errors-in-variables regression", in preparation.

[3] Beaton, A. and Tukey, J.W. (1974). "The fitting of power series, meaning polynomials, illustrated on band spectroscopic data", Technometrics, 16, 147-192.

[4] Brown, M.L. (1982). "Robust line estimation with errors in both variables", J. Amer. Stat. Assoc., 77, 71-79.

[5] Carroll, R.J. and Gallo, P. (1982). "Some aspects of robustness in functional errors-in-variables regression models", Comm. Stat., A 11, 2573-2585.

[6] Coakley, C.W., Mili, L. and Cheniae, M.G. (1994). "Effect of leverage on the finite sample efficiencies of high breakdown estimators", Stat. and Prob. Lett., 19, 399-408.

[7] Fuller, W.A. (1987). Measurement Error Models, New York, John Wiley.

[8] Gleason, J.R. (1993). "Understanding elongation: the scale contaminated normal family", J. Amer. Stat. Assoc., 88, 327-337.

[9] Hoaglin, D.C., Mosteller, F. and Tukey, J.W. (1983). Understanding Robust and Exploratory Data Analysis, New York, John Wiley.

[10] Jureckova, J. and Portnoy, S. (1989). "Asymptotics for one-step M-estimators in regression with application to combining efficiency and high breakdown point", Commun. Statist. A, 16, 2187-2199.

[11] Ketellapper, R.H. and Ronner, A.E. (1984). "Are robust estimation methods useful in the structural errors-in-variables model?", Metrika, 31, 33-41.

[12] Madansky, A. (1959). "The fitting of straight lines when both variables are subject to error", J. Amer. Stat. Assoc., 54, 173-205.

[13] Maronna, R., Bustos, O. and Yohai, V. (1979). "Bias and efficiency robustness of general M-estimators for regression with random carriers", in Smoothing Techniques for Curve Estimation.

[14] Morgenthaler, S. (1989). "Comment on Yohai and Zamar", J. Amer. Stat. Assoc., 84, 636.

[15] Morgenthaler, S. (1991). "A note on efficient regression estimators with positive breakdown points", Stat. and Prob. Lett., 11, 469-472.

[16] Morgenthaler, S. and Tukey, J.W. (1991). Configural Polysampling: A Route to Practical Robustness, New York, John Wiley.

[17] Rousseeuw, P.J. (1984). "Least median of squares regression", J. Amer. Stat. Assoc., 79, 871-880.

[18] Rousseeuw, P.J. (1994). "Unconventional features of positive breakdown estimators", Stat. and Prob. Lett., 19, 417-432.

[19] Rousseeuw, P.J. and Leroy, A.M. (1987). Robust Regression and Outlier Detection, New York, John Wiley.

[20] Siegel, A.F. (1982). "Robust regression using repeated medians", Biometrika, 69, 242-244.

[21] Stefanski, L.A. (1991). "A note on high breakdown estimates", Stat. and Prob. Lett., 11, 353-359.

[22] Stromberg, A.J. (1993). "Computation of high breakdown nonlinear regression estimators", J. Amer. Stat. Assoc., 88, 237-244.

[23] Tukey, J.W. (1980). STAT 411 course notes.

[24] Zamar, R.H. (1989). "Robust estimation in the errors-in-variables model", Biometrika, 76, 149-160.

T H E ANALYTIC JACKKNIFE David F. Andrews*

Abstract. The common jackknife estimate of the variance of a statistic θ̂ (Mosteller and Tukey, 1968) corresponds to the sample variance of n pseudo values, whose sample average is itself an estimate of θ. In this paper we develop operators for the analytic representation of such jackknife estimates. The resulting analytic expressions are used to calculate the bias, variance and other properties of the estimates and to compare these analytically with others, including bootstrap estimates. Procedures for deriving unbiased estimates of parameters are presented.

Key Words: Tukey, jackknife, bootstrap, symbolic computation

1. Introduction

In this paper we develop expressions for jackknife estimates of variance in terms of operators. These operators lead to analytic expressions for the evaluation of properties of estimates. The methods introduced here have been implemented as algorithms for symbolic calculation; the expressions given here were produced by these algorithms.

*University of Toronto, 100 St. George Street, Toronto, Ontario M5S 1A1, Canada.

We consider parameters to be properties of distributions. We consider the case where a sample of n independent, though not necessarily identically distributed, random variables is available for the estimation of these parameters. Although, in some very special cases, a family of distributions may be indexed by a parameter, we consider the estimation of the parameter in the more general situation where the distribution is not restricted to such a parametric family. For example, we may wish to estimate the mean or variance of a distribution without assuming it has a particular parametric form.

Consider a parameter, θ, of a distribution from which n independent, though not necessarily identical, random variables are available for estimation. If the parameter indexes a family of distributions, the score equation,

E_F[ψ̄(θ)] = 0   (1)

where ψ̄(θ) denotes the average of the derivatives of the log-likelihood functions corresponding to each random variable, and where E_F denotes expectation with respect to the distribution F, may be used to define the parameter. The same equation may be used to define moments and parameters associated with M-estimates. The expectation in (1), and hence the parameter, depends on the underlying distribution, F. This dependence will be made explicit by denoting the parameter by θ_F. If the function ψ is sufficiently regular, θ_G, the parameter of the distribution G, may be expanded in a Taylor series about θ_F. Since expectation is a linear operator, the defining equation for θ_G becomes

E_G[ψ(θ_F)] + (θ_G − θ_F)E_G[ψ'(θ_F)] + ((θ_G − θ_F)²/2)E_G[ψ''(θ_F)] + ⋯ = 0   (2)

If the expectation in the first term of the sum is expressed using E_F, the series may be inverted. Without loss of generality we assume E_F[ψ'(θ_F)] = 1, in which case the series for θ_G is a sum of terms involving expectations E_G, or products of these, in the numerator only. It has the form

θ_G = θ_F + Σ_j c_j E_G[g_j,1(θ_F)] ⋯ E_G[g_j,k_j(θ_F)]   (3)

where the c_j are constants and the g_j,k are functions, independent of G. The first few terms of the estimate defined by (1) are given by

θ_G = θ_F − 2E_G[ψ(θ_F)] + E_G[ψ(θ_F)]E_G[ψ'(θ_F)] − (1/2)E_F[ψ''(θ_F)]E_G[ψ(θ_F)]² + ⋯   (4)

assuming, without loss of generality, that θ_F = 0 and E_F[ψ'(θ_F)] = 1. Parameters that are defined in terms of moments may be similarly expanded. The correlation coefficient

69

T H E ANALYTIC JACKKNIFE

may be expanded about a standard bivariate distribution with means 0, variances 1 and correlation p. The first few terms of the series are

ρ_G = p + (E[XY] − p) − (p/2)(E[X²] − 1) − (p/2)(E[Y²] − 1)
      − E[X]E[Y] + (p/2)E[X]² + (p/2)E[Y]²
      + (3p/8)(E[X²] − 1)² + (p/4)(E[X²] − 1)(E[Y²] − 1) + (3p/8)(E[Y²] − 1)²
      − (1/2)(E[XY] − p)(E[X²] − 1) − (1/2)(E[XY] − p)(E[Y²] − 1) + ⋯   (5)

where the expectations are taken under G.

The distribution F plays the same role as the initial conditions of a root-finding algorithm. It needs only to be close enough to G that the series expansion converges rapidly.

An estimate of θ_G may be found by evaluating the expression at the empirical distribution, G = G_n. We will consider parameters and estimates of this form. Parameter estimates based on estimating equations of the form of (1) are often calculated using a form of Newton-Raphson iteration. This form of iteration yields a series expansion of the form of equation (3) (Andrews and Stafford, 1993; Stafford and Andrews, 1993).

We will call a parameter computable if a finite number of iterations is adequate for its definition. The variance is an example of such a parameter. Any series of the form (3) may be truncated to define a parameter. This truncation yields computable parameters which may be estimated. (Most systems, e.g. GLIM, for the numerical estimation of such parameters specify the maximum number of iterations used for the computation.) Thus for computable parameters, the outermost sum in (3) is a finite sum running from j = 1 to j = J, say. Note that the finiteness of the representation of the parameter is independent of the distribution for which it is evaluated.

2. The E and S Operators

The parameters defined above are functionals mapping distributions G into R^k. In this section we define two operators on the space of such functionals. An estimate of a parameter of the form (3) may be represented by the same expansion with every expectation E_G replaced by the sample average E_{G_n}. This expression may be considered as a function of the distribution G and hence as a new functional derived from the functional θ. The estimation process thus induces a mapping from the space of functionals into itself. This operator is denoted by S and defined by

(S**θ)_G = θ_{G_n}


This operator may be simply implemented symbolically by changing every E in (3) into an average, represented by S**E. The process of taking an expectation also produces an expression of the form (3) which may be considered as a functional. Taking the expectation therefore also induces a mapping of the space of functionals into itself, denoted by E. Since the expectation is a linear operator which leaves expectations invariant, it is sufficient to define it for averages and products of averages. Since E[X̄] = E[X], the operation on averages represented here by S**E is defined by

E**S**E = E

The expectation of a product of averages may be computed by generalizing the identity

E[X̄Ȳ] = (1 − n⁻¹)E[X]E[Y] + n⁻¹E[XY] = E[X]E[Y] + O(n⁻¹)   (6)

Note that the generalization is defined by the following algorithm:

• convert each average to a sum
• convert each product of sums to sums over disjoint indices
• take the expectation of the now independent components as a product of expectations
• convert the sums of expectations over disjoint indices into products of expectations of standard sums
• convert sums to averages.

The resulting expression will have the same form as equation (3). Examination of an arbitrary case confirms that

E[X̄Ȳ ⋯] = E[X]E[Y] ⋯ + O(n⁻¹)   (7)

In this way E is defined on all functionals considered. Bootstrap estimates of an expression involving expectations of a distribution G are obtained by evaluating the expectations for the empirical distribution G_n. This is often done by Monte Carlo sampling from G_n; this, however, is just a way of approximating the expression with G replaced by G_n. This latter expression may be evaluated exactly since it involves only


averages over the empirical distribution. The bootstrap estimate of any function of form (3) may be computed analytically by applying the operator S. The resulting analytic expression may be evaluated numerically very quickly since it involves only sums.

The expectation of the estimate of θ is given by E**S**θ. The bias, (B_φ)θ, associated with using the estimate of θ to estimate φ is given by

B_φ**θ = E**S**θ − φ   (8)

defining the operator B_φ = E**S − φ. The variance of the estimate of a parameter θ is E[θ̂²] − (E[θ̂])². This calculation may be considered as an operation on the functional θ to produce a new functional. It induces a new operator V defined by

V = E**(S)² − (E**S)²

For example, this operator applied to a single expectation yields

V**E = n⁻¹(E**(I)² − (E)²)

Exact expressions for the variance of any parameter of the form (3) may be computed by applying V. For example, the operator applied to the correlation parameter (5) yields a rather lengthy expression; if this is evaluated for the Gaussian distribution, it begins n⁻¹(1 − p²)² + O(n⁻²). The bias of the estimate of variance is expressed in terms of operators as B_{V**θ}**V**θ. For example, the bias of the estimate of the variance of the sample average is

B_{V**E}**V**E = −n⁻²(E**(I)² − (E)²)

3. Jackknife Estimates of Variance

The parameters, θ, described above may be represented as series involving powers of expectations. The parameter estimates, θ̂, are the same series with the expectations replaced by averages. Expressions for the pseudo values corresponding to these estimates may be calculated using the definition

p_i = nθ̂ − (n − 1)θ̂₋ᵢ   (9)

where θ̂₋ᵢ denotes the estimate based on the sample with the ith observation removed. Since the estimate is expressed in terms of averages, the calculation of pseudo values is particularly simple.
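Definition (9) leads directly to a generic numerical implementation. The following sketch computes pseudo values for an arbitrary statistic and returns the jackknife variance estimate s²/n; for the mean, the pseudo values reduce exactly to the observations themselves, so the result is exactly the unbiased s²/n.

```python
import numpy as np

def jackknife_variance(x, stat):
    """Jackknife variance of stat(x) from the pseudo values of (9):
    p_i = n * stat(x) - (n - 1) * stat(x with observation i removed).
    Returns the sample variance of the pseudo values divided by n."""
    n = len(x)
    theta = stat(x)
    loo = np.array([stat(np.delete(x, i)) for i in range(n)])
    pseudo = n * theta - (n - 1) * loo
    return pseudo.var(ddof=1) / n
```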


We begin by considering pseudo values as estimates of parameters of the form (3) and define an operator P such that the sample average of estimates of P**θ is the sample average of the p_i. Since the above operations are linear, it is sufficient to define P on products of expectations. Accordingly we define

P**S[X] = X

In this case, the sample average of P**S[X] is just X̄. The operation of P on products of expectations is simply computed using

P**(S S ⋯) = n(E E ⋯) − (n − 1)(J[E] J[E] ⋯)

where

J[E] = (n/(n − 1)) S − (1/(n − 1)) I

In this way P is defined for all functionals considered. If the expectation of the pseudo values is calculated and the sample value of the expectation is taken, this expectation becomes the average of the pseudo values. Applying this to the variance of an average yields

S**E**P**V**E[X] = (S**E[X²] − (S**E[X])²)/(n − 1) = s²_X/n

the unbiased estimate of the variance of X̄. The jackknife estimate of the variance of an estimate of a parameter is typically calculated from the sample standard deviation of the pseudo values: s²/n. This estimate may be expressed in terms of operators as S applied to the jackknife variance JV defined by

JV = (1/(n − 1)) E**(P − E**P)²

The properties of the estimate may be calculated by noting that the terms of the expansion of its estimate (and hence any powers of the estimate) involve products of averages E_{G_n}. For example, the bias of the jackknife estimate of variance may be expressed in terms of operators:

B_{V**θ}**JV**θ   (10)

If θ is a mean, the jackknife estimate of its variance is unbiased, unlike the bootstrap estimate of variance, which is biased by a factor of (1 − 1/n). This improvement in bias holds for more complicated parameters. For example, if θ is the product of two means, S[X]S[Y], the operator (10) yields

B_{V**θ}**JV**θ = n⁻²(E[X²]E[Y²] + E[XY]²) + O(n⁻³)

if both X and Y have expectation 0. The bias of the bootstrap estimate of variance of this parameter, obtained by applying the operator B_{V**θ}**V, is twice this expression. Since many parameters may be expressed in the form (3), with most of the variance coming from the first terms, the above calculations suggest that the jackknife estimate of variance will be less biased than the bootstrap estimate.

The bias of the bootstrap estimate of the variance of the correlation coefficient (5) was calculated. The rather long series was evaluated for the Gaussian distribution to yield

B_{V**ρ}**V**ρ = n⁻²(−1 + 7p² − 11p⁴ + 5p⁶)

The bias of the jackknife estimate of the variance of the correlation coefficient was calculated and applied to the Gaussian distribution to yield

B_{V**ρ}**JV**ρ = n⁻²(2 + (7/2)p² − 8p⁴ + (5/2)p⁶)

The two estimates of variance have biases of the same order. The bootstrap estimate has smaller bias when p = 0.

4. Unbiased Estimates of Variance

Jackknife and bootstrap estimates of variance are not, in general, unbiased. It is, however, easy to compute unbiased estimates of variance for estimates of parameters of the form (3). One way to do this is to subtract from a statistic an estimate of its bias. The resulting statistic has the form of the estimate of an adjusted parameter. Let U_φ be an unbiasing operator defined to produce a new parameter whose estimate is less biased for a parameter φ:

U_φ = I − B_φ   (11)

The operator calculus shows that the bias of the estimate of the new parameter is obtained by applying

B_φ**U_φ = (E**S − I)**B_φ   (12)

The new bias is obtained by applying E**S − I to the original bias. But from (6), the operator E**S − I is a contraction: it is of order n⁻¹. It reduces the bias by one order of magnitude. Repeated iteration of U_φ will produce an estimate with arbitrarily small bias.
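The contraction property can be checked numerically for the variance parameter: subtracting the estimated bias multiplies the plug-in estimate by a further correction factor, and each step shrinks the bias by at least a factor of n. A small Monte Carlo sketch (n = 4 so the effect is visible; the correction factors follow from the plug-in bias −θ/n):

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 4, 400_000
x = rng.standard_normal((reps, n))                 # true variance 1

v = (x ** 2).mean(axis=1) - x.mean(axis=1) ** 2    # plug-in variance, bias -1/n
v1 = v * (1 + 1 / n)                               # one U-step: bias now -1/n^2
v2 = v1 * (1 + 1 / n ** 2)                         # second U-step: bias -1/n^4
biases = [v.mean() - 1, v1.mean() - 1, v2.mean() - 1]
```

With n = 4 the three Monte Carlo biases should sit near −1/4, −1/16 and −1/256, illustrating the order-of-magnitude reduction at each iteration of U_φ.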


Closed form expressions for unbiased estimates may be calculated by noting that the variance of a parameter of form (3) is also of that form. An unbiased estimate of this parameter may be found by computing expressions yielding unbiased estimates of each component. But the bias of the estimate of a product of means is a linear combination of products of means corresponding to the full partition of the original product, as in (6). The linear operation may be expressed in matrix terms. Application of the inverse matrix yields the unbiased estimate. This process yields unbiased estimates of moments as in Fisher (1928). For further discussion see Hall (1992).

4.1 Example, estimating the variance

Let θ_F be the variance of the random variable X. It is defined by

θ_F = E_F[X²] − E_F[X]²

The estimate is

S**θ_F = E_{G_n}[X²] − E_{G_n}[X]²

The bias of this estimate is

B_{θ_F}**θ_F = −n⁻¹(E_F[X²] − E_F[X]²)

The bootstrap estimate of this bias is

−n⁻¹(E_{G_n}[X²] − E_{G_n}[X]²)

The bootstrap bias corrected statistic is the estimate of

U_{θ_F}**θ_F = (1 + n⁻¹)(E_F[X²] − E_F[X]²)   (13)

The bias of this estimator is

B_{θ_F}**U_{θ_F}**θ_F = −n⁻²(E_F[X²] − E_F[X]²)

Consider the space spanned by the basis vectors B = {E_F[X²], E_F[X]²}. The operator E**S is represented by the matrix

E**S = | 1      0       |
       | n⁻¹    1 − n⁻¹ |

with inverse

(E**S)⁻¹ = | 1            0           |
           | −(n − 1)⁻¹   n(n − 1)⁻¹  |

leading directly to the parameter

n(n − 1)⁻¹ (E_F[X²] − E_F[X]²)

with sample estimate given by the usual, unbiased, estimate of variance, s².
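The matrix calculation above can be reproduced numerically. Solving with the transpose of M recovers the coefficients n/(n − 1) and −n/(n − 1), whose sample version is exactly s²; whether M or its transpose acts on coefficient vectors is a bookkeeping convention, and the transpose is used in this sketch.

```python
import numpy as np

def unbiased_variance_coeffs(n):
    """Coefficients, in the basis {E[X^2], E[X]^2}, of the parameter whose
    plug-in estimate is unbiased for the variance.  M is the matrix
    representing E**S on that basis; solving M^T b = (1, -1) inverts it."""
    M = np.array([[1.0, 0.0],
                  [1.0 / n, 1.0 - 1.0 / n]])
    return np.linalg.solve(M.T, np.array([1.0, -1.0]))
```

Applying the coefficients to the sample moments of a data vector reproduces the usual divisor-(n − 1) variance.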

Note that the steps outlined above require no information about the underlying distribution.

4.2 Example, M-estimation

Consider the M-estimate (4). (We could easily compute more terms, but printing the lengthier result would not be useful.) The estimate of θ_G is obtained by replacing the distribution G in the above with the empirical distribution G_n. The bias of this estimate is

(1/n)( E_G[ψ(θ_F)ψ'(θ_F)] − E_G[ψ(θ_F)]E_G[ψ'(θ_F)] )
− (E_F[ψ''(θ_F)]/2n)( E_G[ψ(θ_F)²] − E_G[ψ(θ_F)]² )

The basis, B, for the space of expectations is

{ E_G[ψ(θ_F)²], E_G[ψ(θ_F)ψ'(θ_F)], E_G[ψ(θ_F)], E_G[ψ'(θ_F)], E_G[ψ(θ_F)]², E_G[ψ(θ_F)]E_G[ψ'(θ_F)] }

for which the parameter is represented by the coefficient vector

a = ( 0, 0, −2, 0, −E_F[ψ''(θ_F)]/2, 1 )

The operator E**S is represented by the matrix M given by:

M = | 1      0      0   0   0         0       |
    | 0      1      0   0   0         0       |
    | 0      0      1   0   0         0       |
    | 0      0      0   1   0         0       |
    | 1/n    0      0   0   1 − 1/n   0       |
    | 0      1/n    0   0   0         1 − 1/n |   (14)

leading on inversion, via M⁻¹a, to the new parameter

θ_G → −2E_G[ψ(θ_F)] + (E_F[ψ''(θ_F)]/(2(n − 1)))E_G[ψ(θ_F)²] − (1/(n − 1))E_G[ψ(θ_F)ψ'(θ_F)]
      − (nE_F[ψ''(θ_F)]/(2(n − 1)))E_G[ψ(θ_F)]² + (n/(n − 1))E_G[ψ(θ_F)]E_G[ψ'(θ_F)]

The expectation of the estimate of this parameter is θ_G,

for all G.

5. Acknowledgments

This work was supported in part by grants from the Natural Sciences and Engineering Research Council of Canada. I am pleased to acknowledge the helpful comments of many friends, especially A. Feuerverger, N. Reid and R. Tibshirani, given in the development of this paper. J. Tukey has, in many ways, had the greatest influence on this work. For this I am especially grateful.

REFERENCES

[1] Andrews, D.F. and Stafford, J.E. (1993). "Tools for the symbolic computation of asymptotic expansions", J. Royal Statist. Soc., B55, 613-622.

[2] Stafford, J.E. and Andrews, D.F. (1993). "A symbolic algorithm for studying adjustments to the profile likelihood", Biometrika, 80, 715-730.

[3] Efron, B. and Tibshirani, R. (1993). An Introduction to the Bootstrap, Chapman Hall, New York.

[4] Fisher, R.A. (1928). "Moments and product moments of sampling distributions", Proc. London Math. Soc., 30, 199-238.

[5] Hall, P. (1992). The Bootstrap and Edgeworth Expansion, Springer-Verlag, New York.

[6] Mosteller, F. and Tukey, J.W. (1968). "Data Analysis, Including Statistics", Handbook of Social Psychology, 2nd Edition, Vol. 2, Chapter 10, 80-203.

ASSESSING CONNECTIONS IN NETWORKS OF BIOLOGICAL NEURONS David R. Brillinger*

and Alessandro E. P. Villa†

The stronger the qualitative understanding the data analyst can get of the subject-matter field from which his data comes, the better - just so long as he does not take it too seriously. (Mallows and Tukey, 1982)

Abstract. In this work spike trains of firing times of neurons recorded from various locations in the cat's auditory thalamus are studied. A goal is making inferences concerning connections amongst different regions of the thalamus in both the presence and the absence of a stimulus. Both second-order moment (frequency domain) and full likelihood analyses (a threshold crossing model) are carried through.

1. Introduction

The sequence of spikes of a neuron, referred to as a "spike train", may carry important information processed by the brain and thus may underlie cognitive functions and sensory perception (Abeles, 1991). The data studied are recorded stretches of point processes corresponding to the firing times of neurons measured in the cat's auditory thalamus (Webster et al., 1992). This set of nuclei is often viewed as the penultimate in an ascending hierarchy of processing stages of the auditory sensation that begins at the level of the inner ear. The thalamic nuclei belonging to the cat auditory pathway are the medial geniculate body (MGB), the lateral part of the thalamic posterior complex and the reticular nucleus of the thalamus (R or RE). The RE receives and sends projections to the other thalamic subdivisions through an array of convergent and divergent connections (Villa, 1990). Figure 1 provides a block diagram indicating some plausible connections amongst the regions of the auditory system of concern in this work.

*Statistics Department, University of California, Berkeley, CA, USA.
†Institute of Physiology, University of Lausanne, Switzerland.

Figure 1: A block diagram of the auditory regions of the cat's brain. Labeled blocks include the pars magnocellularis (M), the pars lateralis (PL), and the input.

A basic goal of the paper is to obtain some understanding of how auditory regions of the brain interact. More specifically, results are presented about the association of the pars magnocellularis (M) of the medial geniculate body and the reticular nucleus of the thalamus. Special interest has been raised by these thalamic subdivisions because M, also known as the medial division of the MGB, is characterized by a unique pattern of projection to all the auditory cortical fields (Morel and Imig, 1987; Rouiller et al., 1989), and R plays a key role in the adaptive filtering of the auditory input to the cerebral cortex (Villa et al., 1991). The data were collected during two recording conditions: a first in which the neurons were firing spontaneously, and a second in which white noise sound bursts were applied regularly as a stimulus. The simultaneous recording of the electrophysiological activity of neurons in R and M was replicated, at times separated by 2 to 8 hours, between successive experiments. One


interesting problem involved in this study is how to combine the results of different recording sessions. Two types of analyses are presented in the paper. The first is based on second-moment statistics in the frequency domain, while the second is based on a conceptual (threshold crossing) model for neuron firing. Previous work on the problem of interacting neurons includes Borisyuk (1985) and Gerstein et al. (1985). This paper continues the work of Brillinger and Villa (1994) and Brillinger (1996).

2. Experimental Background

The experiment was conducted in a nitrous-oxide anesthetized young adult cat in compliance with Swiss guidelines for the care and use of laboratory animals and after receiving governmental veterinary approval. The experimental procedure and ordinary time series analysis of this dataset are described in Villa (1988). Briefly, the anesthesia was induced by an intra-peritoneal injection of sodium pentobarbital (Nembutal, 40 mg/kg body weight). The cat was mounted in a stereotaxic instrument and a small hole was trepanated on the skull, at the level of the auditory thalamus. The anesthesia during the recording sessions was maintained by artificial ventilation with a mixture of 80% N2O and 20% O2. The reflex state, pupil size and blood pressure were monitored in order to detect any sign of discomfort of the cat.

Extracellular single unit recordings were made with glass-coated platinum-plated tungsten microelectrodes having an impedance in the range 0.5-2 MΩ measured at a frequency of 1 kHz. Up to six microelectrodes could be advanced independently. The dataset analyzed here was collected from one electrode inserted in R (2 spike trains) and two electrodes inserted in M (3 spike trains). Simultaneous recording of spike trains from the same microelectrode was achieved by using an analog template matching spike sorter according to a technique described elsewhere (Ivarsson et al., 1988; Villa, 1990). The firing times were measured by a microcomputer with an accuracy of 1 ms and stored digitally for off-line analysis. The activity of a group of units was recorded during 40 to 60 minutes. Four recording sessions, performed at intervals of 2 to 8 hours, involving three units in M and two units in R, are used to assess the connections between these thalamic subdivisions.
Several stimuli were applied in order to characterize some typical response properties of auditory units, but the results reported in this paper were collected in two recording conditions: during stimulation by a white


Figure 2: Raster plots of experiment w21q04 with stimulation. The gray bar indicates the presence of the stimulus; the horizontal axis is lag (msec). A pair of units 1, 2 and unit 3 were recorded in the M subdivision of MGB from two microelectrodes, respectively, and a pair of units 4, 5 was recorded in R.

noise burst (at an intensity of 72 dB sound pressure level delivered to both ears simultaneously) at a frequency of 1 stimulus/second (i.e., lasting 200 msec followed by 800 msec of silence), and during the absence of external stimulation, to be referred to as spontaneous activity. The spike trains were collected during 5 to 8 minutes of each recording condition.

Figure 2 provides raster plots of the data for five neurons of one of the recording sessions involving stimulation. Each dot represents the occurrence of a spike. In an individual raster plot, spike times for 300 successive repetitions of the stimulus presentation are stacked above each other, aligned on the stimulus onset. For unit (neuron) 1, one sees a solid transient response a brief latency after the beginning of stimulus application. For unit 4, one can note a transient increase of activity after a longer latency than observed in unit 1, followed by an increase of activity lasting up to the ending of the noise burst.

Upon completion of the experiment, an electrolytic lesion was performed at a known depth for each microelectrode track, by passing a current of about 8 μA during 10 s. At the end of the recording session the animal received a lethal dose of Nembutal, and the brain was prepared for standard histological procedures. These allowed the physical locations of the neurons recorded to be obtained.

3. Statistical Background

Two distinct types of analyses are presented. The first is a second-order moment analysis working with multivariate statistics computed in the frequency domain. The second is a likelihood analysis based on a conceptual model for the firing of a neuron. In the second case, the parameters are estimated by the method of maximum likelihood.

3.1 Second-moment analysis based on FTs

The points of a pair of contemporaneous point processes, M and N, may be denoted $\sigma_m$, $m = 0, \pm 1, \ldots$ and $\tau_n$, $n = 0, \pm 1, \ldots$ respectively. The data may be thought of as a segment of a realization of a bivariate stationary point process. The empirical Fourier transform of the $\sigma$ points is

$$d_M^T(\lambda) = \sum_m e^{-i\lambda\sigma_m} \qquad (1)$$

where $T$ denotes the length of the time period of observation and $\lambda$ is real-valued. Under conditions of stationarity and mixing such Fourier transforms often satisfy central limit theorems. The coherency of the M and N processes, at frequency $\lambda$, may be defined as

$$R_{MN}(\lambda) = \lim_{T \to \infty} \mathrm{corr}\Big\{ \sum_m e^{-i\lambda\sigma_m},\; \sum_n e^{-i\lambda\tau_n} \Big\} \qquad (2)$$
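As a concrete illustration of these definitions, the empirical Fourier transform of a spike train and a segment-averaged coherence estimate (of the kind given as (4) below) can be sketched as follows. This is our own minimal sketch, not the authors' software, and all function names are invented.

```python
import cmath

def empirical_ft(times, lam):
    """Empirical Fourier transform (1) of a point process:
    d^T(lambda) = sum_m exp(-i * lambda * sigma_m)."""
    return sum(cmath.exp(-1j * lam * t) for t in times)

def coherence(segments_m, segments_n, lam):
    """Segment-averaged coherence estimate:
    |sum_l d_M^(l) conj(d_N^(l))|^2 / (sum_l |d_M^(l)|^2 * sum_l |d_N^(l)|^2),
    the sums running over separate time stretches l."""
    cross = 0j
    power_m = power_n = 0.0
    for times_m, times_n in zip(segments_m, segments_n):
        d_m = empirical_ft(times_m, lam)
        d_n = empirical_ft(times_n, lam)
        cross += d_m * d_n.conjugate()
        power_m += abs(d_m) ** 2
        power_n += abs(d_n) ** 2
    return abs(cross) ** 2 / (power_m * power_n)
```

By the Cauchy-Schwarz inequality the estimate lies in [0, 1]; the coherence of a spike train with itself is 1 at every frequency, while values near 0 indicate no linear time invariant association at that frequency.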

Its modulus-squared, the coherence, $|R_{MN}(\lambda)|^2$, is a measure of the linear time invariant dependence of the two processes at frequency $\lambda$; see e.g. Brillinger (1975a). One way to see the reasonableness of these definitions is to consider the case of ordinary time series whose components take on the values 0-1. With a fine enough time interval, expression (1) corresponds to the usual Fourier transform of a stretch of such 0-1 values. A measure of conditional dependence of processes M and N, given some other processes, is provided by the partial coherency

$$R_{MN|rest} = \frac{R_{MN} - R_{M|rest}\, R_{N|rest}}{\sqrt{(1 - |R_{M|rest}|^2)(1 - |R_{N|rest}|^2)}} \qquad (3)$$

having suppressed the dependence on $\lambda$. Here $R_{MN}$ is given by (2), while

$$R_{M|rest}(\lambda) = \lim_{T \to \infty} \mathrm{corr}\Big\{ \sum_m e^{-i\lambda\sigma_m},\; B^T(\lambda) \Big\}$$

with $B^T(\lambda)$ denoting the best (minimum mean square error linear) predictor of (1), excluding the process N. Estimates of the coherence and partial coherence may be based directly on empirical Fourier transforms of the point processes involved. For example, one could take

$$|\hat{R}_{MN}(\lambda)|^2 = \frac{\big|\sum_l d_M^{(l)}(\lambda)\, \overline{d_N^{(l)}(\lambda)}\big|^2}{\sum_l |d_M^{(l)}(\lambda)|^2 \, \sum_l |d_N^{(l)}(\lambda)|^2} \qquad (4)$$

with the sums over $l = 1, \ldots, n$ empirical Fourier transforms of the form (1) based on separate time stretches. For another estimation method see Brillinger (1975a) and Brillinger et al. (1976).

Examples of partial coherence computations for networks of three neurons may be found in Brillinger et al. (1976). Other neurophysiological examples may be found in Rosenberg et al. (1989). References on partial coherence in the time series case include Brillinger (1975), Gersch (1972), and Tick (1963).

For the data sets of interest, the neurons fall into particular regions of the brain and it is desired to have measures of the strengths of connection amongst pairs of these regions. This necessitates a form of multivariate analysis. The particular regions studied here are M and R, as sketched in Figure 1.

The fact that the empirical Fourier transforms are approximately Gaussian suggests employing some traditional procedure of multivariate (Gaussian) analysis. Let $\mathbf{R}$ refer to the matrix of sample coherencies computed for all available neurons (either in region M or region R). (Again dependence on $\lambda$ is being suppressed.) Let $\mathbf{R}_{MM}$ and $\mathbf{R}_{RR}$ refer to submatrices of $\mathbf{R}$ corresponding to the M units and R units, respectively. The $|\cdot|$ in (5) below denotes the determinant of the matrix involved, with the dependence on $\lambda$ suppressed for simplicity's sake. A test of independence of the M and R components can be based on the likelihood ratio or deviance statistic

$$-2\,n \log\{ |\mathbf{R}| / (|\mathbf{R}_{MM}|\,|\mathbf{R}_{RR}|) \} \qquad (5)$$

approximating its null distribution by a chi-squared with degrees of freedom $2 p_M p_R$. In the case of independence, for the population values, $|\mathbf{R}| = |\mathbf{R}_{MM}|\,|\mathbf{R}_{RR}|$. The $p_M$ and $p_R$ denote the numbers of rows in $\mathbf{R}_{MM}$ and $\mathbf{R}_{RR}$ respectively and $n$ is the number of time segments in an estimate such as (4). (Here $p_M p_R$ complex parameters have been set to 0 under the null hypothesis of independence of the M and R regions, hence the indicated degrees of freedom.) More accurate approximations to the distribution of (5) are suggested in Wahba (1971) and Krishnaiah et al. (1983).

Independent experiments may be combined by adding the statistic (5) over experiments, with a corresponding addition of degrees of freedom. This will be the case for the example in Section 4. When stimulation is present, to assess connections independent of stimulation, one might work with the coherencies having "partialled out" the point process of stimulus application times. To do so one proceeds as in (3), but for example replacing $\mathbf{R}_{MM}$ by

$$\mathbf{R}_{MM} - \mathbf{R}_{MS} \mathbf{R}_{SS}^{-1} \mathbf{R}_{SM}.$$
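The determinant test (5) is immediate to compute once a sample coherency matrix is in hand. The sketch below is ours, not the authors' software; it uses a generic numerical library, and the convention that the first `p_m` rows of the matrix are the M units is an assumption of the sketch.

```python
import numpy as np

def independence_deviance(R, p_m, n_segments):
    """Deviance statistic (5): -2 n log(|R| / (|R_MM| |R_RR|)),
    where R is the Hermitian matrix of sample coherencies at one
    frequency, with the first p_m rows/columns the M units.
    Returns the statistic and its chi-squared degrees of freedom,
    2 * p_m * p_r, with p_r the number of R units."""
    R = np.asarray(R, dtype=complex)
    p_r = R.shape[0] - p_m
    # Determinants of Hermitian positive definite matrices are real.
    det = lambda A: np.linalg.det(A).real
    stat = -2.0 * n_segments * np.log(
        det(R) / (det(R[:p_m, :p_m]) * det(R[p_m:, p_m:])))
    return stat, 2 * p_m * p_r
```

When the off-diagonal block between the M and R units is 0 the statistic is exactly 0; cross-coherency shrinks the determinant of the full matrix and inflates the statistic.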

The values of such a statistic will be presented in Section 4.

3.2 Likelihood analysis of a conceptual model

Brillinger and Segundo (1979) introduce maximum likelihood fitting of a threshold crossing model for a neuron firing as a function of input. Suppose that firing of the neuron takes place when an internal state variable, $U(t)$, the membrane potential, upcrosses a (random) threshold $\theta(t)$. The value $U(t)$ will depend on the inputs received by the cell. It will be assumed that the threshold is reset after firing, which in effect introduces a refractory period. Figure 3 provides a graph of the functions $U(t)$ and $\theta(t)$ for one case. The piles along the time axis indicate input neuron spike firing times. The J-shaped curves correspond to $\theta(t)$. Output firings occur at the times when $U(t)$ and $\theta(t)$ meet. In the case of this figure, the input is inhibitory, as seen by the dipping of $U(t)$ after the arrival of an input spike. Consider a neuron, M, driving a neuron, N. In the fitting of the threshold model it is convenient to replace the point process values by 0-1 valued discrete time series values, taking a fine time interval. Define $M_t = 1$ if there is a spike in the interval $(t, t+1]$ and $M_t = 0$ otherwise, for $t = 0, \pm 1, \ldots$ and some small time interval. There is a similar definition for $N_t$. If $\gamma_t$ denotes the time since the neuron N last fired, the membrane potential will be approximated by

$$U_t = \sum_u m_{t-u} M_u$$

for some summation function $m_t$. It will be further assumed that

$$\theta_t = d + e\gamma_t + f\gamma_t^2 + g\gamma_t^3 + \epsilon_t \qquad (6)$$

Figure 3: The functions $U(t)$, $\theta(t)$ and input spike times. [Plot of membrane potential and threshold function against time, 0 to 5 seconds.]

with the $\epsilon_t$ independent standard normals. The cubic form is employed here to be able to produce J-shaped threshold forms and as a form linear in the unknown parameters. Taking $\Phi$ for the standard normal cumulative, the log likelihood, given the input, is

$$\sum_t \big[ N_t \log \Phi(U_t - \theta_t) + (1 - N_t) \log(1 - \Phi(U_t - \theta_t)) \big].$$
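The log likelihood above is a sum of Bernoulli terms with a probit link and can be transcribed directly. The sketch below is ours, with the membrane potential and threshold supplied as precomputed sequences.

```python
import math

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def log_likelihood(N, U, theta):
    """Sum over t of N_t log Phi(U_t - theta_t)
    + (1 - N_t) log(1 - Phi(U_t - theta_t))."""
    ll = 0.0
    for n_t, u_t, th_t in zip(N, U, theta):
        p = Phi(u_t - th_t)
        ll += n_t * math.log(p) + (1 - n_t) * math.log(1.0 - p)
    return ll
```

Maximizing this quantity over the parameters that enter through $U_t$ and $\theta_t$ (the summation function and the cubic threshold coefficients) gives the fits reported in Section 4.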

In the example to be presented, parameter estimates will be determined by maximizing this expression. Results of such a fitting, in the case of single input and output spike trains or of spike train output with noise input, may be found in Brillinger and Segundo (1979) and Brillinger (1992).

For the experiments of interest a multivariate version of the model is needed. There will be an arbitrary number of neurons, situated in several regions of the brain. Also a stimulus will be present during particular time intervals. Define a stimulus variable by setting $S_t = 1$ whilst the stimulus is applied and $S_t = 0$ otherwise. Then for the $j$-th neuron one can consider a model with $N_{j,t} = 1$ when

$$a S_t + \sum_k \sum_u a_{jk,t-u} N_{k,u} > \theta_{j,t} \qquad (7)$$

and $N_{j,t} = 0$ otherwise, $t = 0, \pm 1, \ldots$, with $\theta_{j,t}$ given by (6) and $\gamma_{j,t}$ the time elapsed since neuron $j$ last fired. The $j$-th and $k$-th neurons may be in the same or different regions of the brain.

To assess the hypothesis that some of the $a_{jk,\cdot}$ are identically 0 one can compute the change in the deviance ($-2$ times the log-likelihood) occurring

when the hypothesis is incorporated. This quantity may be viewed as a measure of the strength of the connection; see Brillinger (1996). Examples of this and estimates of the $a_{jk,\cdot}$ are presented in the next section.

4. Results

The regions of the brain for which results are presented in this paper are M, the pars magnocellularis, and R (or RE), the reticular nucleus of the thalamus. Questions of interest include: Is there association of regions R and M? Are there direct connections of R and M? Is apparent association due to signal driving? How strong are the connections? The results of the analyses are presented next.

4.1 Second-order analysis

Figure 4 provides the values of the modulus-squared of the statistic (4) in the cases of stimulation (upper panels) and of spontaneous firing (lower panels), and also the corresponding partial coherence "removing" the effects of stimulation as in (3). The dashed line provides the approximate upper 95% null line, based on the approximating chi-squared distribution. The degrees of freedom were summed over 4 cell groups and totalled 48 here. One sees low frequency association in each case. The upper left graph shows strong association around 1.8 Hz and apparent association up to about 15 Hz. Note that no major peaks were observed in either condition at frequencies higher than 25 Hz. The upper right panel of Figure 4 shows the overall association much reduced when the linear time invariant effects of the stimulus are "removed". There is an intriguing peak in the two top graphs at 7.9 Hz. The bottom two graphs are much the same, as they should be. In a sense the upper right graph is meant to estimate the lower left. (Up to sampling variation this would be precisely so if the relationship were linear.) One would like to say that there is association at very low frequencies independent of the stimulation. Association near 7.9 Hz appears only during the stimulus condition, but it was apparently not linearly locked to the stimulus onset, as suggested by the persistence of the peak at 7.9 Hz in the estimate of the partial 'stimulated' coherence (Figure 4, upper right panel). However, it needs to be noted that more than 5% of the points of the panel are above the 95% null line. The spike train recordings of the four groups were carried out at different times. The excess of points here may be the consequence of a time trend or some other individual experimental effect.
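Summing the statistic and its degrees of freedom over the four cell groups, and drawing the approximate 95% null line, requires only a chi-squared quantile; the Wilson-Hilferty approximation suffices for a sketch. The code and the per-group numbers below are ours and purely illustrative.

```python
import math

def chi2_upper95(df):
    """Approximate upper 95% point of the chi-squared distribution
    with df degrees of freedom (Wilson-Hilferty cube-root formula)."""
    z95 = 1.6449  # upper 95% point of the standard normal
    c = 2.0 / (9.0 * df)
    return df * (1.0 - c + z95 * math.sqrt(c)) ** 3

def combine(stats, dfs):
    """Add independent deviance statistics and their degrees of freedom."""
    return sum(stats), sum(dfs)

# Four cell groups, 12 degrees of freedom each (illustrative values only).
total, df = combine([60.1, 70.3, 55.9, 62.2], [12, 12, 12, 12])
null95 = chi2_upper95(df)  # df = 48, as in the text
```

The combined statistic is then compared with `null95`; for 48 degrees of freedom the approximation gives a critical value of about 65.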


[Four panels: "RM: deviance, stimulated"; "partial, stimulated"; "spontaneous"; "partial, spontaneous"; horizontal axes in frequency (cycles/sec).]

Figure 4: The statistic (4) summed over four recording sessions.


Figure 5: Deviance differences for the likelihood fits. Degrees of freedom in parentheses. [Diagram of stimulus-evoked activity, change in deviance: arrows among the stimulus, M (medial division of MGB), and R (reticular nucleus of the thalamus), labeled 231.7 (90), 114.1 (90), 316.4 (2), and 331.5 (3).]

4.2 Likelihood analysis

Figure 5 displays the results of fitting the model (7) and in particular the changes in deviance when the arrows concerned are removed from the diagram. The experiments are the same as in Section 4.1. The results are combined by adding the deviances. The figures in brackets are the degrees of freedom of a null chi-squared statistic. It is clear that there is a strong association with the stimulus in each case. The direct connection from M to R appears stronger than the reverse, if one takes deviance as a measure of strength of association.

In the case of a single recording session it is possible to show the estimated $a_{jk,\cdot}$. Figure 6 shows the neurons recorded in the first recording session and the estimated functions $a_{jk,\cdot}$ of the model (7). The upper left panel provides the summation function for the influence of the second neuron of R on the first in the presence of the neurons of M. The remaining panels on the left refer to the influence on the first neuron of R of the 3 neurons of M. The right column similarly refers to the second neuron of R. To put it in other words, the left column refers to R1 being influenced by R2, M1, M2, M3 and the right to R2 being influenced by R1, M1, M2, M3. Note that discharges of cell pair R1 and R2 were recorded simultaneously from the same electrode. Units M1 and M2 were also recorded from the same, but different from the previous, electrode.

Figure 6: The left column provides influences on neuron R1 from other neurons of R and those of region M. In the top panel, arrows indicate the considered directions of influence. The right column similarly refers to R2. [Panels plot influence of M and R on R against lag (msec).]

The dashed lines give approximate $\pm 2$ standard error limits. The standard errors are approximate, computed by the usual maximum likelihood formulas. While for the three experiments merged the influence of M on R appeared substantial (deviance of 231.7 with 90 degrees of freedom), none of the summation function estimates (for the first experiment alone) appears strongly significant. Further investigations are being carried out. Perhaps the standard errors are inappropriate. Perhaps there are lurking correlations. A problem is how to combine such $a_{jk,\cdot}$ for several experiments. The difficulty is that different neurons and paths are involved; hence, for example, different latencies of effect may occur.

5. Discussion and Summary

As indicated at the outset, the goal was to make inferences concerning connections amongst regions of the auditory thalamus, both in the presence and absence of a stimulus. Here the work has been on the reticular nucleus of the thalamus and the pars magnocellularis of the medial geniculate body. Two methods for investigating the wiring diagram of a particular point process system have been presented and illustrated. A second-moment analysis showed the usefulness of the study of frequency bands and provided a global estimation of the strength of the connections between the regions under study. A likelihood approach was based on the basic biology. One interesting feature of this method is the possibility of making detailed inferences about the temporal pattern of the connections. This may represent a fundamental clue for understanding the information processing carried out by these regions of the thalamus. Both methods proved convenient for combining the results of different experiments. Uncertainty measures were central to making inferences.

There are a number of difficulties that arise in this work. The data are numerous and of complex structure. A neuron may receive thousands of inputs, yet data are available for but a few. The sampling of the regions of the brain cannot be expected to be unbiased. An approach needs to be developed that reduces the influence of individual neurons on the statistics computed, in case something unusual is taking place for one of them. Nonstationarity and experiment effects are sometimes present. Recent evidence of non-linear deterministic dynamics in spike trains, as indicated by the apparent existence of low-dimensional chaotic attractors (Celletti and Villa, 1996a,b), should also be taken into account for global estimations of cumulated recording sessions. Thus, it appears that


future work will look for the evolution of the system in time. This also represents a necessary step for applying these methods to neurophysiological data about learning and memory.

6. Acknowledgments

Some may have been intrigued by the Mallows-Tukey quote at the head of the article. The remark is highlighted because one of us, as a graduate student, remembers JWT making remarks such as: "To consult with a chemist, you have to become a chemist." We thank the referee for helpful comments on the manuscript. The authors would like to acknowledge the support of U.S. National Science Foundation Grants DMS-9300002 and DMS-9625774, Office of Naval Research Grant ONR-N00014-94-1, and Swiss National Science Foundation Grant 31-37723.93.

REFERENCES

[1] Abeles, M. (1991). Corticonics: Neural Circuits of the Cerebral Cortex, Cambridge University Press, Cambridge, UK.

[2] Borisyuk, G.N. (1985). "A new statistical method for identifying interconnections between neuronal network elements", Biological Cybernetics, 52, 301-306.

[3] Brillinger, D.R. (1975). "The identification of point process systems", Annals of Probability, 3, 909-929.

[4] Brillinger, D.R. (1975). Time Series: Data Analysis and Theory, Holt Rinehart, New York.

[5] Brillinger, D.R. (1992). "Nerve cell spike train data analysis: a progression of technique", J. American Statistical Assoc., 87, 260-271.

[6] Brillinger, D.R. (1996). "Remarks concerning graphical models for time series and point processes", Revista de Econometria, 16, 1-23.

[7] Brillinger, D.R., Bryant, H.L. and Segundo, J.P. (1976). "Identification of synaptic interactions", Biological Cybernetics, 22, 213-228.

[8] Brillinger, D.R. and Segundo, J.P. (1979). "Empirical examination of the threshold model of neuron firing", Biological Cybernetics, 35, 213-220.

[9] Brillinger, D.R. and Villa, A.E.P. (1994). "Examples of the investigation of neural information processing by point process analysis", pp. 111-127 in Advanced Methods of Physiological System Modelling 3 (ed. V.Z. Marmarelis), Plenum, New York.

[10] Celletti, A. and Villa, A.E.P. (1996). "Low dimensional chaotic attractors in the rat brain", Biological Cybernetics, 74, 387-394.

[11] Celletti, A. and Villa, A.E.P. (1996). "Determination of chaotic attractors in the rat brain", J. of Statistical Physics, 84, 1379-1386.

[12] Gersch, W. (1972). "Causality or driving in electrophysiological signal analysis", Mathematical Bioscience, 14, 177-196.

[13] Gerstein, G.L., Perkel, D.H. and Dayhoff, J.E. (1985). "Cooperative firing activity in simultaneously recorded populations of neurons: detection and measurement", J. Neuroscience, 5, 881-889.

[14] Ivarsson, C., de Ribaupierre, Y. and de Ribaupierre, F. (1988). "Influence of auditory localization cues on neuronal activity in the auditory thalamus of the cat", J. Neurophysiology, 59, 586-606.

[15] Krishnaiah, P.R., Lee, J.C. and Chang, T.C. (1983). "Likelihood ratio tests on covariance matrices and mean vectors of complex multivariate normal populations and their applications in time series", pp. 439-476 in Handbook of Statistics 3 (eds. D.R. Brillinger and P.R. Krishnaiah), North-Holland, Amsterdam.

[16] Mallows, C.L. and Tukey, J.W. (1982). "An overview of the techniques of data analysis, emphasizing its exploratory aspects", pp. 111-172 in Some Recent Advances in Statistics (eds. T. de Oliveira et al.), Academic, New York.

[17] Morel, A. and Imig, T.J. (1987). "Thalamic projections to fields A, AI, P, VP in cat auditory cortex", Journal of Comparative Neurology, 265, 119-144.

[18] Rosenberg, J.R., Amjad, A.M., Breeze, Brillinger, D.R. and Halliday, D.M. (1989). "The Fourier approach to the identification of functional coupling between neuronal spike trains", Progress in Biophysics and Molecular Biology, 53, 1-31.

[19] Rouiller, E.M., Rodrigues-Dagaeff, C., Simm, G., de Ribaupierre, Y., Villa, A.E.P. and de Ribaupierre, F. (1989). "Functional organization of the medial division of the medial geniculate body of the cat: tonotopic organization, spatial distribution of response properties and cortical connections", Hearing Research, 39, 127-142.

[20] Tick, L.J. (1963). "Conditional spectra, linear systems and coherency", pp. 197-203 in Time Series Analysis (ed. M. Rosenblatt), Wiley, New York.

[21] Villa, A.E.P. (1988). Influence de l'écorce cérébrale sur l'activité spontanée et évoquée du thalamus auditif du chat, Presses Imprivite, Lausanne.

[22] Villa, A.E.P. (1990). "Functional differentiation within the auditory part of the thalamic reticular nucleus of the cat", Brain Research Review, 15, 25-40.

[23] Villa, A.E.P., Rouiller, E.M., Simm, G.M., Zurita, P., de Ribaupierre, Y. and de Ribaupierre, F. (1991). "Corticofugal modulation of the information processing in the auditory thalamus of the cat", Experimental Brain Research, 86, 506-517.

[24] Wahba, G. (1971). "Some tests of independence for stationary time series", J. Royal Statistical Soc. B, 33, 153-166.

[25] Webster, D.B., Popper, A.N. and Fay, R.R. (1992). The Mammalian Auditory Pathway, Springer Verlag, New York.

ESTIMATING ABUNDANCES FOR A BREEDING BIRD ATLAS

Christopher A. Field*

Abstract. In this paper, we give a description of the process used to obtain reasonable quality abundances for a breeding bird atlas. A feature was that we had a large amount of data available but it was typically of rather low quality. We show that the estimates, along with their confidence intervals, are quite broadly consistent with comparable abundance estimates.

Key Words: Poisson counts, maximum likelihood, bootstrap, confidence intervals, robustness

1. Introduction

During the five year period from 1986 to 1990, data were collected for a breeding bird atlas for the three Maritime Provinces of Canada: New Brunswick, Nova Scotia and Prince Edward Island. The purpose of the Atlas (Erskine, 1992) is to determine which birds breed in which parts of the region and to give a rough estimate of the number of breeding pairs for each species. To collect the data, the region was divided into 1682 squares each 10 km² and it was decided to collect data on as many squares as possible. One quarter of the squares were designated as priority squares and all but one of these were atlassed. The missing priority square contained an artillery range on a military base. Each square was assigned to a volunteer birder under the direction of a regional coordinator. The whole operation was overseen by a coordinator and a scientific committee. The birder was given up to five years to thoroughly atlas their square. They had to identify behavior in birds which indicated that breeding was taking place. The usual signs were the presence of a nest or young, an adult carrying food or nesting material and, for certain species, protective behavior near the nest site. As well as establishing the presence of breeding activity for a particular species, the birder was asked to estimate the number of breeding

*Department of Mathematics, Statistics & Computing Science, Dalhousie University, Halifax, Nova Scotia, B3H 3J5, Canada.


Interval      Code   Junco   Mourning Warbler
Present       X      401     164
0             0      525     254
1             1      4       5
2-10          2      43      83
11-100        3      235     161
101-1000      4      233     57
1001-10000    5      80      6
over 10000    6      0       0

Table 1: Abundance Counts for Junco and Mourning Warbler.

pairs of that species within the 10 km² region. This was done by counting the number of breeding pairs in some small part of the region and then extrapolating to the portion of the square that contained suitable breeding habitat. This process is by nature rather crude, and the atlasser was asked simply to give estimates 0 or 1 or an interval estimate of 2-10, 11-100, 101-1000, 1001-10000 or over 10000. Some atlassers did not provide estimates but simply recorded whether or not a particular species bred in their region.

In the next section, we describe the basic technique to get abundance estimates including some simple robust modifications to maximum likelihood. Data are included for two species to give an indication of the nature of the data. We then compute the overall abundance estimate and find confidence intervals. A final section compares the results with those obtained in other studies.

2. Estimation

To begin we present the data for two species: a common species, the dark-eyed junco, and a moderately uncommon species, the mourning warbler. The data are given in Table 1 and are shown spatially in Figure 1 for the mourning warbler. The symbol on the map indicates the category from Table 1. In our analysis we considered separately each of the 114 species considered suitable for our technique. Very uncommon species or species with very specialized habitats were estimated by more direct techniques. While it is likely that the abundance of one species contains information about the abundance of some other species, it seemed unlikely that the complex relationships could be modelled given the low quality of the data. We view the number of breeding pairs of a species in square $j$ to have a Poisson distribution with parameter $\lambda_j$. Assuming independence of the regions, the abundance for the region will then be Poisson with parameter $\sum_{j=1}^{n} \lambda_j$

Figure 1: Mourning Warbler Abundances.

where $n$ is the number of atlassed squares. For each atlassed square where the species occurs, we have an observation that the number of breeding pairs observed is either 0 or 1 or lies between $10^{k-1}$ and $10^k$ for $k = 1, \ldots, 5$, where the interval is closed at the right. The first step is to obtain an estimate for $\sum_{j=1}^{n} \lambda_j$. A second step will be to examine the validity of our assumptions in light of the data and to make modifications to robustify the estimates. To proceed with the estimation, we consider the problem of estimating $\lambda_j$. While it can be argued that there will be some spatial correlation among the $\lambda_j$'s, it would be difficult to model this complex relationship. In fact the spatial correlation would vary with each of the 114 species considered and would depend on the habitat demands of the particular species and the availability of the habitat within the square. Our approach has been to estimate $\lambda_j$ based on the observed interval for that square. These estimated values are then aggregated over the region. An alternate approach would be to use information from the nearest neighbour squares and obtain an estimate which is a weighted average of values from the square and its

Observation   Interval           $\hat{\lambda}_j$
0             [0,0]              0
1             [1,1]              1
2             [2,10]             4.15
3             [11,100]           46.91
4             [101,1000]         474.53
5             [1001,10000]       4750.74
6             [10001,100000]     47512.85

Table 2: Estimates of $\lambda_j$.

neighbours. Since there is an overall aggregation, it seems likely that the estimate obtained in this fashion is not likely to differ much from that obtained by the simpler method we actually used. We now determine the estimates $\hat{\lambda}_j$. For each square where an abundance estimate is given, we know that the observed abundance $X_j$ satisfies $a \le X_j \le b$ for some $a$ and $b$. The maximum likelihood estimate is obtained by choosing $\lambda_j$ to maximize $P_\lambda[a \le X_j \le b]$. [...]

$P_1 > P_2 > P_3$ denote the 1980 populations of the county's three largest urban places (usually a minor/census civil division, MCD/CCD). In 48 (respectively, 274) counties, only 1 (respectively, 2) MCDs/CCDs could be found, in which case the divisor is 1 (respectively, 2). A histogram of the 3068 values for the continental U.S. is approximately symmetric, and


KAREN KAFADAR

20 strata of urbanicity are chosen to contain roughly similar counts. For counties within these strata, age-adjusted lung cancer mortality rates are combined to form stratum rates. Because an urban environment in Montana may be very different from the perception of urbanness in New Jersey, the country is divided into 7 regions, and urban stratum rates are calculated separately for each region. These stratum rates are then smoothed using a univariate smoother, and the regional smoothed urban trends are shown in Figure 1. For many regions, the stratum mortality rate increases with increasing urbanicity, particularly for the North Central region but less so for the Southeast and Rockies regions. (The trends are even stronger for white females; see Kafadar et al. (1996) for these trends as well as for details on the fitting and smoothing methods.) Using these smoothed stratum rates, we now form residuals, or urban- and age-adjusted rates: observed county rate / smoothed urban stratum rate. These new rates are smoothed using headbanging and then

Figure 2: Lung cancer mortality among white males, 1973-1987, in the northeast. (a) Smoothed age-adjusted rates; (b) Smoothed age- and urban-adjusted rates.

mapped for the states in the Northeast (Figure 2) and the Southeast (Figure 3). Consistent with the maps of the 1970-1980 rates, adjusted for age only (Pickle et al. 1987, pp. 76-79), Figures 2a and 3a show noticeably high rates around New Jersey and the Chesapeake in the northeast, and in

GEOGRAPHICAL TRENDS IN CANCER MORTALITY

much of the southeast, particularly around the Gulf Coast, central Georgia and northern Florida, and eastern Virginia. After adjusting for urbanicity in the northeast (Figure 2b), rates around the Chesapeake Bay are high both before and after the urbanicity adjustment, but the apparently moderately high rates in Maine and Vermont before adjustment (Figure 2a) become extremely high in Maine and extremely low in Vermont (Figure 2b), i.e., in the highest and lowest quintiles, respectively, among the 240 counties in this region. The map suggests that the rates are unusually high in Maine, given the relatively low urbanicity in its counties. Having adjusted the rates in the southeast for urbanicity (Figure 3b), the areas in Alabama, Georgia, and Florida are still high, but none in Virginia, while a few isolated counties in Tennessee now appear unusually high for their levels of urbanicity.
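The adjustment chain described here — stratum rates within urbanicity strata, a smooth across strata, then the ratio of observed county rate to smoothed stratum rate — can be sketched as follows. This is our own schematic: a three-point running mean stands in for the univariate smoother actually used, the subsequent spatial (headbanging) smooth of the adjusted rates is omitted, and all names are invented.

```python
def stratum_rates(county_rates, county_strata, n_strata):
    """Average age-adjusted rate within each urbanicity stratum."""
    totals = [0.0] * n_strata
    counts = [0] * n_strata
    for rate, s in zip(county_rates, county_strata):
        totals[s] += rate
        counts[s] += 1
    return [t / c if c else 0.0 for t, c in zip(totals, counts)]

def smooth(values):
    """Simple 3-point running mean (a stand-in for the univariate smoother)."""
    out = []
    for i in range(len(values)):
        window = values[max(0, i - 1): i + 2]
        out.append(sum(window) / len(window))
    return out

def urban_adjusted(county_rates, county_strata, smoothed):
    """Urban- and age-adjusted rate: observed county rate over
    smoothed urban stratum rate."""
    return [r / smoothed[s] for r, s in zip(county_rates, county_strata)]
```

An adjusted value near 1 marks a county that is typical for its urbanicity stratum; values well above 1 flag counties, like those in Maine, whose rates are high given their low urbanicity.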

Figure 3: Lung cancer mortality among white males, 1973-1987, in the southeast. (a) Smoothed age-adjusted rates; (b) Smoothed age- and urban-adjusted rates.

4.2 Nonwhite prostate cancer mortality and ethnic composition

Urbanicity plays a slight, and much less important, role in prostate cancer mortality (Pickle 1987, p. 15). However, a map of the age-adjusted rates for nonwhites in U.S. counties, 1973-1987, shows an increasing gradient from the west to the east, a trend that holds up even within individual regions


Percent of nonwhite population classified as: African-American, Native American, Asian American, Other*

Northeast        8.5   13.8   76.5    1.2
Great Lakes      6.1    8.0   83.8    2.2
Southeast        2.3   93.9    2.3    1.6
North Central    9.0   70.1   13.4    7.5
South Central    3.3   74.2    4.3   18.2
Rockies          9.3   17.8   47.2   25.6
West Coast      23.2   32.2    7.6   36.9

*Note: Other is calculated as: (Total Pop − White Pop − Afro Pop − Native Pop − Asian Pop) / (Total Pop − White Pop)

Table 1: Ethnic Population in U.S. Regions (1980 Census)

of the country. Figure 4 shows the rates in the Great Lakes and Southeast regions, which demonstrate patches of high rates in the eastern, as opposed to western, parts of the regions. Rates in Wisconsin, Michigan,

Figure 4: Smoothed rates of prostate cancer mortality in nonwhites, 1973-1987. (a) Great Lakes region; (b) Southeast region.


and northern Illinois appear particularly low (Figure 4a), as are those in southern Alabama, much of Georgia, and coastal Florida (Figure 4b). Pickle (1992) suggested that the ethnic composition of the nonwhite population varies substantially across the United States. In the west, a good proportion of the nonwhites are Asian, an ethnic group whose prostate cancer mortality is known to be quite low, whereas in the east, the nonwhite population is primarily African-American, a group which has among the highest rates in the world (Table 1). Thus, a useful adjustor in this case is $p$ = percent of the nonwhite population classified as African-American. Stratum rates for this variable are roughly linear in $p$, except at the extremes of the interval (Figure 5):

$$f(p) = 15.16 + 31.29\,p, \qquad 0.0 \le p \le 0.98$$
$$\phantom{f(p)} = 42.81, \qquad\qquad\; 0.98 < p$$
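Written as code, the fitted stratum-rate function (with the constants exactly as given above) is simply:

```python
def f(p):
    """Smoothed nonwhite prostate-cancer stratum rate as a function of
    p, the proportion of the nonwhite population classified as
    African-American (constants as fitted in the text)."""
    if 0.0 <= p <= 0.98:
        return 15.16 + 31.29 * p
    return 42.81
```

So, for example, f(0.5) = 30.805, while beyond p = 0.98 the fit is flat at 42.81.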