
A SELECTION OF EARLY STATISTICAL PAPERS OF J. NEYMAN

JERZY NEYMAN (when many of these papers were written)

UNIVERSITY OF CALIFORNIA PRESS
Berkeley and Los Angeles, California
1967

Library of Congress Catalog Card Number: 67-21433
Set in Monophoto and printed by J. W. Arrowsmith Ltd., Bristol, England

CONTENTS

Frontispiece
Foreword
Author's Note

1. Sur un théorème métrique. Fundamenta Mathematicae, 5, 328-30 (1923). p. 1
2. Contribution to the theory of small samples. Biometrika, 17, 472-9 (1925). p. 3
3. Further notes on non-linear regression. Biometrika, 18, 257-62 (1926). p. 13
4. On the correlation of the mean and the variance. Biometrika, 18, 401-13 (1926). p. 19
5. Sur les lois de probabilité qui tendent vers la loi de Gauss. Comptes Rendus de l'Acad. des Sci., Paris, 182, 1590-2 (1926). p. 31
6. Sur une propriété de la loi de probabilité à laquelle obéit le coefficient de variation. Comptes Rendus de l'Acad. des Sci., Paris, 183, 107-9 (1926). p. 33
7. Méthodes nouvelles de vérification des hypothèses. Comptes Rendus du I Congrès de Mathématiciens des Pays Slaves, Warsaw, 355-66 (1929). p. 35
8. Contribution to the theory of certain test criteria. Bull. Int. Stat. Inst. 3-48 (1929). p. 43
9. Counting virulent bacteria and particles of virus. (With K. Iwaszkiewicz.) Acta Biologiae Exp. 6, 101-42 (1931). p. 71
10. On the two different aspects of the representative method. J. Royal Stat. Soc. 97, 558-625 (1934). (A Spanish version of this paper appeared in Estadística, J. Inter-Amer. Stat. Inst. 17, 587-651 (1959).) p. 98
11. On the problem of confidence intervals. Annals of Mathematical Statistics, 6, 111-6 (1935). p. 142
12. Sur la vérification des hypothèses statistiques composées. Bull. Soc. Math. de France, 63, 346-66 (1935). p. 147
13. Statistical problems in agricultural experimentation. (With K. Iwaszkiewicz and St. Kołodziejczyk.) (Paper read before the Industrial and Agricultural Section of the Royal Statistical Society, March 28, 1935.) J. Royal Stat. Soc. Suppl. 2, 107-80 (1935). p. 160
14. Statistical studies in questions of bacteriology. Part I. The accuracy of the 'dilution method'. (With T. Matuszewski and J. Supińska.) J. Royal Stat. Soc. Suppl. 2, 63-82 (1935). p. 209
15. Complex experiments. Contribution to the discussion of the paper by F. Yates. J. Royal Stat. Soc. Suppl. 2, 235-42 (1935). p. 225
16. The labour bill and output of arable farms. Contribution to the discussion of the paper by R. McG. Carslaw and P. E. Staves. J. Royal Stat. Soc. 97, 627-35 (1935). p. 233
17. Errors of the second kind in testing 'Student's' hypothesis. (With B. Tokarska.) J. Amer. Stat. Assoc. 31, 318-26 (1936). p. 238
18. La vérification de l'hypothèse concernant la loi de probabilité d'une variable aléatoire. Comptes Rendus de l'Acad. des Sci., Paris, 203, 1047-9 (1936). p. 246
19. Sur la loi de probabilité limite d'un système de variables aléatoires. Comptes Rendus de l'Acad. des Sci., Paris, 203, 1211-13 (1936). p. 248
20. Outline of a theory of statistical estimation based on the classical theory of probability. Phil. Trans. R. S. of London, Ser. A, No. 767, 236, 333-80 (1937). p. 250
21. 'Smooth' test for goodness of fit. Skandinavisk Aktuarietidskrift, 20, 149-99 (1937). p. 291
22. Contributions to the theory of sampling human populations. J. Amer. Stat. Assoc. 33, 101-16 (1938). p. 320
23. L'estimation statistique traitée comme un problème classique de probabilité. Actualités Scientifiques et Industrielles, No. 739, 25-57 (1938). p. 322
24. On statistics, the distribution of which is independent of parameters involved in the original probability law of the observed variables. Statistical Research Memoirs, 2, 58-9 (1938). p. 354
25. On a new class of 'contagious' distributions, applicable in entomology and bacteriology. Annals of Mathematical Statistics, 10, 35-57 (1939). p. 356
26. Fiducial argument and the theory of confidence intervals. Biometrika, 32, 128-50 (1941). p. 375
27. Un théorème d'existence. Comptes Rendus de l'Acad. des Sci., Paris, 222, 843-5 (1946). p. 395
28. Contribution to the theory of the chi-square test. Proc. Berkeley Symposium on Math. Stat. and Prob., 1945 (J. Neyman, Editor), University of California Press, Berkeley, 1949, pp. 239-73. p. 397

Appendix: Bibliography of scientific papers by J. Neyman. p. 424

FOREWORD

This volume is one of a trilogy comprising a collection of selected scientific papers by Professor Egon S. Pearson, a volume of joint contributions by Professors Jerzy Neyman and Egon S. Pearson, and a volume of selected publications by Professor Jerzy Neyman. In selecting the papers, an attempt was made to include those contributions which are most representative of the scientific activities of the authors, taking into account the fact that many, if not most, of the early papers are now not readily accessible.

As can be gathered from the list of papers included in these three volumes, the contributions by Neyman and by Pearson cover an extremely wide and varied domain. It is not possible to give here an abridged description of their content, nor is it feasible to list those papers by other authors which have continued the work initiated by Neyman and Pearson. Such a list would encompass a major portion of the contemporary statistical literature and would require a separate volume. Nevertheless, it appears necessary and appropriate for those of us who have had the privilege of being Neyman's students to offer some comments on the place of the papers in the history of statistics and on their potential value to the reader.

To view the papers in proper perspective one must turn back to the years preceding 1925, when the statistical scene was dominated by Karl Pearson and R. A. Fisher. A recollection of the struggles which led to the elaboration of the Neyman-Pearson theory of testing hypotheses (now called simply 'the theory of testing hypotheses') can be found in a paper by E. S. Pearson entitled 'The Neyman-Pearson Story 1926-34' (Research Papers in Statistics, F. N. David, editor, John Wiley and Sons, London, 1966). A further struggle led to Neyman's theory of confidence intervals (paper 56 in the bibliography at the end of this volume).

The main steps in the development of the theory of testing hypotheses seem to have been (1) a clear demonstration of the logical impossibility of building a theory using only the then prevalent principles of statistical inference, (2) the introduction of the concept of power and the proof of the Neyman-Pearson fundamental lemma, and (3) the invention of techniques and concepts aimed at the solution of testing problems involving composite hypotheses. In 1933 (paper 34) the framework so constructed was further linked to the use of Bayesian methods and economic considerations, leading to concepts (minimax, minimum average risk, etc.) which are now a familiar part of the theory of statistical decision functions. How novel these ideas must have appeared at the time, and how difficult their assimilation must have been, can be gathered from the fact that in the late 1950's R. A. Fisher was still protesting against their use.

On the contrary, the theory of confidence intervals seemed at first to be only a minor modification of Fisher's theory of fiducial inference. This possibility was roundly dismissed by Fisher himself. An exposé indicating the conceptual and numerical discrepancies between the two approaches is reproduced here (paper 69).

The concepts of confidence intervals and of the Neyman-Pearson theory have proved immensely fruitful. A natural but far-reaching extension of their scope can be found in Abraham Wald's theory of statistical decision functions. The elaboration and application of the statistical tools related to these ideas has already occupied a generation of statisticians. It continues to be the main lifestream of theoretical statistics.

Of course, the discussions concerning the foundations of statistics have not terminated. In fact they have proceeded with renewed vigour during the past few years. Thus, a reprinting of major substantial contributions in the field appears particularly timely. However, we would like to emphasize that the scientific and practical value of the papers reprinted here completely overshadows whatever disagreement may remain on more philosophical grounds.

The volume of selected papers of J. Neyman includes only contributions published or written before the early 1940's. It is hoped that later contributions will eventually be gathered in another volume. Nevertheless, the papers collected here already illustrate another valuable aspect of Neyman's approach to statistical problems. Several of the papers are purely mathematical in nature and many are devoted to questions of pure theory. However, several papers discuss applications of statistics to substantive fields related to agriculture or biology. Later papers, not included here, extend this domain of activity to astronomy, meteorology and other physical sciences. The interesting feature of the approach used by Neyman is that, in all these papers, the substantive problem is discussed per se and a mathematical model of the phenomenon is constructed. An effort is then made to derive from the structure of the mathematical model new statistical methods particularly adapted to the solution of the problems under consideration. Mere application of standard statistical techniques does not occur in these or later papers.

In conclusion, we hope that the present collection will be as valuable a source of inspiration to others as it continues to be for us. We also hope that in the near future it will be possible to complete the collection by adjunction of later papers.

For permission to reproduce certain papers we are indebted to the Biometrika Trustees; to the Editors and Publishers of the Annals of Mathematical Statistics, the Journal of the Royal Statistical Society, and the Journal of the American Statistical Association; to the Directors of the Librairie Scientifique Hermann, Paris, and the International Statistical Institute; to the Editors and Publishers of Fundamenta Mathematicae and Skandinavisk Akt. Tidsk.; to the Council of the Royal Society; and to the Director, Imprimerie Gauthier-Villars, Paris.

Students of J.N. at Berkeley
Berkeley, October 6, 1966

AUTHOR'S NOTE

All my research papers are a reflection of many divergent influences I experienced over the years. First I have to acknowledge the influence of Serge Bernstein, my teacher in probability. From him I tried to acquire his tendency of concentrating on some 'big' problem. I am not sure that I was successful in this, but while hunting for a big problem I certainly established the habit, contrary to that of my teacher, of neglecting rigour. Regretfully, the subsequent influence of Waclaw Sierpinski was not sufficient to counteract this habit.

Next, I must acknowledge the lasting influence of Karl Pearson, particularly of his Grammar of Science, which I read at the suggestion of Bernstein and which proved a revelation. Another book that left an indelible imprint on my thinking was Lebesgue's Leçons sur l'intégration ..., the most beautiful monograph that I ever read. It was suggested to me by Professor C. K. Russyan, another of my university teachers, under whom I had my first contacts with (δ, ε) reasoning and who pushed me to study measure theory. At the time, in about 1915, no connexion between measure, on the one hand, and probability and statistics, on the other, was in sight. Subsequent decades proved differently.

My thinking, and undoubtedly my writings, are strongly influenced by contacts I had with Antoni Przeborski, whose adogmatism and somewhat irreverent, Anatole France-like skepticism I learned reverently to admire, first in Kharkov as Przeborski's student and later in Warsaw as his assistant.

Later on, in the late 1920's and early 1930's, there came the influences of Emile Borel and of Richard von Mises. A few casual remarks of Borel in his book Le Hasard brought out the idea that there may be something mathematical in statistics, in fact that there may be a 'big' problem. Also, the Wahrscheinlichkeit, Statistik und Wahrheit of von Mises, which I helped to translate into English, confirmed me as a radical 'frequentist', intent on treating probability as a mathematical idealization of relative frequency. However, von Mises' definition of probability did not attract me and I became a follower of Kolmogorov.

The remarks of Borel on 'fonctions en quelque sorte remarquables', that may perhaps be used for statistical tests, combined nicely with what I learned from Egon S. Pearson: the essence of statistical problems seen, so to speak, from the inside. There followed a period of an exciting and extensive cooperation on the incipient theory of testing statistical hypotheses, which resulted in the joint papers reproduced in the second volume of the present series. My hope was that the cooperation would extend to cover interval estimation.

A separate reference is due to R. A. Fisher. Even though, in a quarter of a century long dispute, I combated certain views of Fisher, there is not the slightest doubt that his many remarkable achievements had a profound influence on my own thinking and work.

Last but not least, I wish affectionately to acknowledge the influences of my students, whose many searching questions I tried to answer and whose skepticism I had to face.

J. NEYMAN

On a metric theorem concerning closed sets

BY JERZY NEYMAN

The object of this Note is a metric theorem which is true for (closed) linear sets but which fails for plane sets. It is the following theorem. Let $F$ be a closed and bounded linear set of measure zero, and $\mathcal{F}$ a family of intervals such that for every point $p$ of $F$ and every positive number $\varepsilon$ there exists an interval $\delta$ of $\mathcal{F}$ of length less than $\varepsilon$ containing $p$. Then for every $\varepsilon > 0$ there exists a finite number of intervals $\delta_i$ $(i = 1, 2, \dots, n)$ of $\mathcal{F}$ covering $F$ and whose lengths have a sum less than $\varepsilon$.
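The statement can be made concrete with a small editorial illustration; no such computation appears in Neyman's note, and the particular set F and the family of intervals below are assumptions chosen only for the example. For the closed null set F = {0} ∪ {1/k : k ≥ 1} a finite subfamily of intervals of total length below any prescribed ε is produced explicitly:

```python
def finite_cover(eps):
    """Finitely many intervals of total length < eps covering the closed
    null set F = {0} union {1/k : k >= 1}."""
    cover = [(-eps / 8.0, eps / 8.0)]       # one interval around 0; it also
                                            # swallows every point 1/k < eps/8
    k = 1
    while 1.0 / k >= eps / 8.0:             # only finitely many points remain
        half = eps / (16.0 * k * (k + 1))   # interval lengths 2*half sum to < eps/8
        cover.append((1.0 / k - half, 1.0 / k + half))
        k += 1
    return cover

cover = finite_cover(0.01)
print(len(cover), sum(b - a for a, b in cover))   # 801 intervals, total length < 0.01
```

The point of the theorem is that such a finite, arbitrarily thin cover can always be extracted for linear closed null sets, while the analogous statement for plane sets is false.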

Further notes on non-linear regression

\[
\ldots = \int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty} y\,(y - P_n)\,f(x, y)\,dx\,dy.
\tag{17}
\]

Using the ordinary notation $\sigma_y$ for the standard deviation of $y$, or putting

\[
\sigma_y^2 = \int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty} y^2 f(x, y)\,dx\,dy,
\tag{18}
\]

and dividing the two parts of (17) by 2, we have

\[
\Sigma_n^2 = \sigma_y^2 - a_0\lambda_0 - a_1\lambda_1 - \dots - a_k\lambda_k - \dots - a_n\lambda_n.
\tag{19}
\]

Comparing this expression with (12), we see that to get $\Sigma_n^2$ we have only to put into (11) $\sigma_y^2$ instead of $y$ and $\lambda_k$ instead of $x^k$ $(k = 0, 1, 2, \dots, n)$, and to divide by the coefficient of $y$, i.e. by $\Delta_n$. In this way we have

\[
\Sigma_n^2 = \frac{1}{\Delta_n}
\begin{vmatrix}
\sigma_y^2 & \lambda_0 & \lambda_1 & \cdots & \lambda_n\\
\lambda_0 & \mu_0 & \mu_1 & \cdots & \mu_n\\
\lambda_1 & \mu_1 & \mu_2 & \cdots & \mu_{n+1}\\
\vdots & \vdots & \vdots & & \vdots\\
\lambda_n & \mu_n & \mu_{n+1} & \cdots & \mu_{2n}
\end{vmatrix},
\tag{20}
\]

or

\[
\Sigma_n^2 = \sigma_y^2\,(1 - \eta_n^2),
\tag{21}
\]

where

\[
\eta_n^2 = -\frac{1}{\sigma_y^2\,\Delta_n}
\begin{vmatrix}
0 & \lambda_0 & \lambda_1 & \cdots & \lambda_n\\
\lambda_0 & \mu_0 & \mu_1 & \cdots & \mu_n\\
\lambda_1 & \mu_1 & \mu_2 & \cdots & \mu_{n+1}\\
\vdots & \vdots & \vdots & & \vdots\\
\lambda_n & \mu_n & \mu_{n+1} & \cdots & \mu_{2n}
\end{vmatrix},
\tag{22}
\]

which is the expression of the n-th correlation ratio we wanted to find. It is easy to see that for $n = 1$ we have

\[
\eta_1^2 = \frac{\lambda_1^2}{\mu_2\,\sigma_y^2},
\tag{23}
\]

which is the expression of the ordinary coefficient of correlation squared, $r^2$.*
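The relations (19) and (21)-(23) lend themselves to a direct numerical verification. The following sketch is an editorial addition, not part of Neyman's paper; the discrete joint law, the order n, and all variable names are illustrative assumptions.

```python
import numpy as np

# Spot-check of (19) and (21)-(23) on an arbitrary discrete joint law f(x, y).
rng = np.random.default_rng(0)
xs = np.array([-2.0, -0.5, 1.0, 2.5])
ys = np.array([-1.0, 0.5, 2.0])
f = rng.random((xs.size, ys.size))
f /= f.sum()                                   # joint probabilities

X, Y = np.meshgrid(xs, ys, indexing="ij")
X = X - (f * X).sum()                          # measure x from its mean
Y = Y - (f * Y).sum()                          # measure y from its mean

n = 2                                          # order of the parabola
mu = [(f * X**k).sum() for k in range(2 * n + 1)]     # mu_k
lam = [(f * Y * X**k).sum() for k in range(n + 1)]    # lambda_k
sy2 = (f * Y**2).sum()                                # sigma_y^2

# Coefficients a_0..a_n of P_n from the normal equations
# sum_j a_j mu_{j+k} = lambda_k, k = 0, ..., n.
M = np.array([[mu[j + k] for j in range(n + 1)] for k in range(n + 1)])
a = np.linalg.solve(M, np.array(lam))
Pn = sum(a[k] * X**k for k in range(n + 1))

# (19): the mean square deviation from P_n equals sigma_y^2 - sum a_k lambda_k.
print(np.isclose((f * (Y - Pn)**2).sum(),
                 sy2 - sum(a[k] * lam[k] for k in range(n + 1))))   # True

# (21) and (23) for n = 1: eta_1^2 = 1 - Sigma_1^2 / sigma_y^2 coincides
# with lambda_1^2 / (mu_2 sigma_y^2), the square of r.
a1 = lam[1] / mu[2]
Sigma1 = sy2 - a1 * lam[1]
print(np.isclose(1 - Sigma1 / sy2, lam[1]**2 / (mu[2] * sy2)))      # True
```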

We see further, as the equation of the n-th regression parabola (11) can also be written in the following manner,

\[
y = -\frac{1}{\Delta_n}
\begin{vmatrix}
0 & 1 & x & x^2 & \cdots & x^k & \cdots & x^n\\
\lambda_0 & \mu_0 & \mu_1 & \mu_2 & \cdots & \mu_k & \cdots & \mu_n\\
\lambda_1 & \mu_1 & \mu_2 & \mu_3 & \cdots & \mu_{k+1} & \cdots & \mu_{n+1}\\
\vdots & \vdots & \vdots & \vdots & & \vdots & & \vdots\\
\lambda_n & \mu_n & \mu_{n+1} & \mu_{n+2} & \cdots & \mu_{n+k} & \cdots & \mu_{2n}
\end{vmatrix}
= P_n \text{ (say)},
\tag{24}
\]

that, having the equation of the n-th regression-parabola, we get at once the expression of $\eta_n^2$ by putting $\eta_n^2$ instead of $y$ and $\lambda_k/\sigma_y^2$ instead of the $k$-th power of $x$ $(k = 0, 1, \dots, n)$. It is necessary to remember that $x$ and $y$ are measured from their means.

Expansions of $P_n$ and $\eta_n^2$

We shall find it of interest to calculate the general expression for the difference

\[
Q_n = P_n - P_{n-1}.
\tag{25}
\]

As $P_0 = 0$, we shall be able to express the right-hand side of the equation of the n-th regression-parabola in the form of a sum

\[
P_n = Q_1 + Q_2 + \dots + Q_n.
\tag{26}
\]

* As $\eta_n^2$ is nothing but the most natural generalization of the coefficient of correlation squared, it would be perhaps better to call it 'the n-th coefficient of correlation'. Unfortunately this name is adapted for what I should call 'the coefficient of correlation between n-th powers of variates'.


In this way we shall reach the expressions for successive regression-parabolas, without any appeal to the theory of continued fractions and of orthogonal functions. For $n \le 4$ these expressions have been given by Professor K. Pearson in Biometrika, loc. cit.

We shall need here the following property of determinants. Let $D$ be a determinant and $D_{rs\ldots t|ij\ldots k}$ the determinant which we obtain from $D$ by cancelling the r-th, s-th, ..., t-th rows and the same number of columns, namely the i-th, j-th, ..., k-th columns. We have

\[
D\,D_{1m|1m} = D_{1|1}\,D_{m|m} - D_{1|m}\,D_{m|1},
\tag{27}
\]

where $m > 1$. If $D_{1|1}$ and $D_{1m|1m}$ are not equal to zero, we can write

\[
\frac{D}{D_{1|1}} = \frac{D_{m|m}}{D_{1m|1m}} - \frac{D_{1|m}\,D_{m|1}}{D_{1|1}\,D_{1m|1m}},
\tag{28}
\]

the identity which we shall apply to our expressions of $\eta_n^2$ and $P_n$.
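Identity (27) is the classical relation between a determinant and its first-order minors (often attributed to Jacobi), and a quick numerical check is easy to write down. The sketch below is an editorial addition; the helper `minor` and the random test matrix are assumptions of the example, not anything from the paper.

```python
import numpy as np

# Numeric spot-check of (27): D * D_{1m|1m} = D_{1|1} D_{m|m} - D_{1|m} D_{m|1},
# with rows and columns numbered from 1 as in the text.
rng = np.random.default_rng(1)
D = rng.standard_normal((5, 5))

def minor(a, rows, cols):
    """Determinant of `a` with the given (zero-based) rows and columns removed."""
    keep_r = [i for i in range(a.shape[0]) if i not in rows]
    keep_c = [j for j in range(a.shape[1]) if j not in cols]
    return np.linalg.det(a[np.ix_(keep_r, keep_c)])

m = 4   # zero-based index of the last row/column (the text's m = 5)
lhs = np.linalg.det(D) * minor(D, [0, m], [0, m])
rhs = minor(D, [0], [0]) * minor(D, [m], [m]) - minor(D, [0], [m]) * minor(D, [m], [0])
print(np.isclose(lhs, rhs))   # True
```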

Turning back to (25), and writing $\Delta'$ for the determinant $\Delta'_n$ of the equation (11) of the n-th regression parabola (its first row being $y, 1, x, \dots, x^n$ and its further rows $\lambda_k, \mu_k, \mu_{k+1}, \dots, \mu_{n+k}$, $k = 0, 1, \dots, n$), and denoting by $m$ its last row and column, we can now write

\[
Q_n = P_n - P_{n-1} = \frac{\Delta'_{m|m}}{\Delta'_{1m|1m}} - \frac{\Delta'}{\Delta'_{1|1}},
\tag{29}
\]

and, using (28),

\[
Q_n = \frac{\Delta'_{1|m}\,\Delta'_{m|1}}{\Delta'_{1|1}\,\Delta'_{1m|1m}}
= \frac{1}{\Delta_{n-1}\,\Delta_n}
\begin{vmatrix}
\lambda_0 & \mu_0 & \cdots & \mu_{n-1}\\
\lambda_1 & \mu_1 & \cdots & \mu_n\\
\vdots & \vdots & & \vdots\\
\lambda_n & \mu_n & \cdots & \mu_{2n-1}
\end{vmatrix}
\cdot
\begin{vmatrix}
1 & x & \cdots & x^n\\
\mu_0 & \mu_1 & \cdots & \mu_n\\
\vdots & \vdots & & \vdots\\
\mu_{n-1} & \mu_n & \cdots & \mu_{2n-1}
\end{vmatrix},
\tag{30}
\]

since $\Delta'_{1|1} = \Delta_n$ and $\Delta'_{1m|1m} = \Delta_{n-1}$,

or, changing the columns of the first determinant into rows,

\[
Q_n = \frac{1}{\Delta_{n-1}\,\Delta_n}
\begin{vmatrix}
\lambda_0 & \lambda_1 & \cdots & \lambda_n\\
\mu_0 & \mu_1 & \cdots & \mu_n\\
\mu_1 & \mu_2 & \cdots & \mu_{n+1}\\
\vdots & \vdots & & \vdots\\
\mu_{n-1} & \mu_n & \cdots & \mu_{2n-1}
\end{vmatrix}
\cdot
\begin{vmatrix}
1 & x & \cdots & x^n\\
\mu_0 & \mu_1 & \cdots & \mu_n\\
\mu_1 & \mu_2 & \cdots & \mu_{n+1}\\
\vdots & \vdots & & \vdots\\
\mu_{n-1} & \mu_n & \cdots & \mu_{2n-1}
\end{vmatrix}.
\tag{31}
\]

We see that the second determinant can be obtained from the first one by putting $x^k$ for $\lambda_k$ $(k = 0, 1, \dots, n)$. The first determinant we shall denote by $V_n$, and the second by $V_n(x)$. Now the equation of the n-th regression-parabola can be written as follows:

\[
y = \frac{V_1 V_1(x)}{\Delta_0 \Delta_1} + \frac{V_2 V_2(x)}{\Delta_1 \Delta_2} + \dots + \frac{V_n V_n(x)}{\Delta_{n-1} \Delta_n}.
\tag{32}
\]

Applying the same method to (22), we easily find the expansion of $\eta_n^2$, namely,

\[
\eta_n^2 = \frac{1}{\sigma_y^2}\left[\frac{V_1^2}{\Delta_0 \Delta_1} + \frac{V_2^2}{\Delta_1 \Delta_2} + \dots + \frac{V_n^2}{\Delta_{n-1} \Delta_n}\right].
\tag{33}
\]
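The expansions (32) and (33), as reconstructed here, can be checked numerically. The sketch below is an editorial addition and no part of the paper; it builds the determinants Delta_k, V_k and V_k(x) from the moments of an arbitrary equally-weighted point distribution and compares the results with a direct least-squares fit.

```python
import numpy as np

# Check of the expansions (32)-(33) against a direct least-squares parabola.
rng = np.random.default_rng(2)
xs = rng.standard_normal(9)
ys = 0.8 * xs - 0.4 * xs**2 + 0.1 * rng.standard_normal(9)
w = np.full(xs.size, 1.0 / xs.size)            # equal weights
xs = xs - (w * xs).sum()
ys = ys - (w * ys).sum()                       # measure from the means

n = 2
mu = [(w * xs**k).sum() for k in range(2 * n + 2)]
lam = [(w * ys * xs**k).sum() for k in range(n + 1)]
sy2 = (w * ys**2).sum()

def Delta(k):                                  # Hankel moment determinant
    return np.linalg.det(np.array([[mu[i + j] for j in range(k + 1)]
                                   for i in range(k + 1)]))

def V(k, top):                                 # V_k with the supplied first row
    rows = [list(top)] + [[mu[i + j] for j in range(k + 1)] for i in range(k)]
    return np.linalg.det(np.array(rows))

# (32): y-hat as the sum of Q_k = V_k V_k(x) / (Delta_{k-1} Delta_k).
def P(x):
    return sum(V(k, lam[:k + 1]) * V(k, [x**j for j in range(k + 1)])
               / (Delta(k - 1) * Delta(k)) for k in range(1, n + 1))

# Direct least-squares parabola of order n for comparison.
M = np.array([[mu[i + j] for j in range(n + 1)] for i in range(n + 1)])
a = np.linalg.solve(M, np.array(lam))
print(np.isclose(P(0.3), sum(a[k] * 0.3**k for k in range(n + 1))))  # True

# (33): eta_n^2 from the V's against 1 - Sigma_n^2 / sigma_y^2.
eta2 = sum(V(k, lam[:k + 1])**2 / (Delta(k - 1) * Delta(k))
           for k in range(1, n + 1)) / sy2
Pn = np.array([P(x) for x in xs])
print(np.isclose(eta2, 1 - (w * (ys - Pn)**2).sum() / sy2))          # True
```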

As the calculation of higher parabolas and correlation ratios needs the knowledge of the higher moments of the sampled population, the above results will be of interest in theoretical research, where the higher moments may be supposed to be known. Such a case arises, for instance, when we consider the dependency of the standard deviation upon the mean of a sample.

The expansion of $P_n$ can be used to find the necessary and sufficient conditions which the constants $\lambda$ and $\mu$ must satisfy in order that the actual regression line may be represented by an n-th order parabola. Assume that

\[
y = P_n
\tag{34}
\]

represents the actual regression line. Then, using (24), and denoting by $p(x)$ the probability law of $x$, we find

\[
\lambda_k = \int_{-\infty}^{+\infty} P_n\,x^k\,p(x)\,dx
= -\frac{1}{\Delta_n}
\begin{vmatrix}
0 & \mu_k & \mu_{k+1} & \cdots & \mu_{n+k}\\
\lambda_0 & \mu_0 & \mu_1 & \cdots & \mu_n\\
\lambda_1 & \mu_1 & \mu_2 & \cdots & \mu_{n+1}\\
\vdots & \vdots & \vdots & & \vdots\\
\lambda_n & \mu_n & \mu_{n+1} & \cdots & \mu_{2n}
\end{vmatrix},
\tag{35}
\]

where the coefficient of $\mu_{n+k}$ is not zero, and this is the necessary condition required. Assuming that the expansion (32) is permissible, we shall now prove that this condition is sufficient. In fact, assume that all the $\lambda_k$'s are linear functions of $\mu_k, \mu_{k+1}, \dots, \mu_{n+k}$ represented by (35). Considering $V_{n+s}$, where $s \ge 1$, we see that the terms of the first line of this determinant can be represented by the same linear and homogeneous function (35) of the corresponding terms of the $n+1$ following lines of the same determinant. We conclude that $V_{n+s} = 0$. Now if we consider $V_n$, and if we multiply the terms of its t-th row by the coefficient of $\mu_{k+t-2}$ $(t = 2, 3, \dots, n+1)$ in (35) and subtract the products from the terms of the first line, we shall see that owing to (35) the determinant $V_n$ will be transformed into the product of $\Delta_n$ into the coefficient of $\mu_{n+k}$ in (35), both of them being different from zero. Therefore $V_n \ne 0$. As the coefficient of $x^n$ in $V_n(x)$ is $\Delta_{n-1} \ne 0$, we see that if all the $\lambda$'s satisfy the condition (35), where the coefficient of $\mu_{n+k}$ is not zero, the actual regression line is represented by an n-th order parabola. The general condition established can be considered as the generalization of respective formulae given by K. Pearson for $n \le 4$ (loc. cit.), and by A. A. Tchouproff* for $n \le 2$.
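The vanishing of V_{n+s} asserted in this argument is easy to exhibit numerically. The sketch below is an editorial addition with an arbitrary example distribution whose regression is an exact second-order parabola; under that assumption V_1 and V_2 are in general different from zero while V_3 and V_4 vanish.

```python
import numpy as np

# When the true regression E[y|x] is exactly a 2nd-order parabola, the
# determinants V_3, V_4, ... vanish, as the sufficiency argument requires.
xs = np.array([-1.5, -0.5, 0.0, 1.0, 2.0])
px = np.array([0.1, 0.3, 0.2, 0.25, 0.15])     # probability law of x
ey = 1.0 + 0.5 * xs - 0.75 * xs**2             # E[y|x]: an exact parabola

xc = xs - (px * xs).sum()
yc = ey - (px * ey).sum()                      # centred regression values

mu = [(px * xc**k).sum() for k in range(9)]
lam = [(px * yc * xc**k).sum() for k in range(5)]   # lambda_k = E[y x^k]

def V(k):
    rows = [lam[:k + 1]] + [[mu[i + j] for j in range(k + 1)] for i in range(k)]
    return np.linalg.det(np.array(rows))

print([round(V(k), 12) for k in (1, 2, 3, 4)])
# V_1 and V_2 generally non-zero; V_3 = V_4 = 0 since the regression is quadratic.
```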

* Grundbegriffe u. Grundprobleme der Korrelationstheorie, Teubner (1925), pp. 48-9.

Reprinted from Biometrika, 18, 257-62, 1926.

On the correlation of the mean and the variance in samples drawn from an 'infinite' population

BY JERZY NEYMAN

The relation between the two characters in a sample, mean $\bar{x}$ and standard deviation $\sigma$ (or variance $\sigma^2$), is a very interesting one, as several methods of judging whether a certain sample has been taken from a given population, or if two different samples have been taken from the same population, are based on the comparison of certain functions of $\bar{x}$ with certain functions of $\sigma$. Afterwards we shall draw attention to a method of judging whether a sample is likely to have been taken from a population whose distribution is supposed to be known. We shall use a notation like that in the previous paper, namely: $\mu_k$ will denote the $k$-th moment of the sampled population about its mean; $\lambda_k$ the mean value of $(\sigma^2 - \bar{\sigma}^2)\,\bar{x}^k$, where $\bar{\sigma}^2$ is the mean value of $\sigma^2$; and $n$ the number of individuals in the sample. It is convenient to measure the deviations of the two variates $\sigma^2$ and $\bar{x}$ from their respective means in terms of their standard deviations, namely

\[
\sigma_{\bar{x}}^2 = \frac{\mu_2}{n}, \qquad
\sigma_{\sigma^2}^2 = \frac{(n-1)\bigl[(n-1)\mu_4 - (n-3)\mu_2^2\bigr]}{n^3}.
\tag{1}
\]

Adopting these units we shall put in our formulae for the higher parabolae of regression, instead of $\lambda_k$, the expression

\[
q_k = \frac{\lambda_k}{\sigma_{\sigma^2}\,\sigma_{\bar{x}}^k}.
\tag{2}
\]

* Biometrika, 6, 1 et seq.
† Biometrika, 18, 257-62. Preceding article in this collection.
‡ Biometrika, 13, 296-300.


The deviation of the mean in any sample from its mean value in samples we shall denote by $x'$, and the ordinates of the parabola of regression measured from the mean of $\sigma^2$ by $u'$; when $x'$ and $u'$ are measured in terms of their standard deviations as given in (1) we shall term them $x$ and $u$. We shall also need the usual frequency constants $\beta$ for the sampled population and for the population of means of samples taken from the former. The first we shall denote by $\beta_1, \beta_2, \dots$ and the second by $B_1, B_2, \dots$. The formula for the second order parabola of regression can be taken directly from the paper of K. Pearson (loc. cit.), and applying it to the variates under consideration we have

\[
u = q_1 x + \frac{q_2 - q_1\sqrt{B_1}}{B_2 - B_1 - 1}\,\bigl(x^2 - \sqrt{B_1}\,x - 1\bigr).
\tag{3}
\]
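Formula (3), as reconstructed here, is the least-squares parabola for any standardized pair of variates, and this can be checked numerically. The sketch below is an editorial addition; the gamma sample and the relation generating u are arbitrary assumptions of the test, with x playing the role of the standardized mean.

```python
import numpy as np

# Check that (3) agrees with a direct least-squares fit of u on (1, x, x^2).
rng = np.random.default_rng(3)
x = rng.gamma(2.0, 1.0, 200_000)
u = 0.3 * x + 0.2 * x**2 + rng.standard_normal(x.size)
x = (x - x.mean()) / x.std()
u = (u - u.mean()) / u.std()

q1, q2 = (u * x).mean(), (u * x**2).mean()
m3, m4 = (x**3).mean(), (x**4).mean()          # sqrt(B_1) and B_2 of x
coef = (q2 - q1 * m3) / (m4 - m3**2 - 1.0)
parab = lambda t: q1 * t + coef * (t**2 - m3 * t - 1.0)

A = np.vstack([np.ones_like(x), x, x**2]).T
c = np.linalg.lstsq(A, u, rcond=None)[0]
t = 1.7
print(np.isclose(parab(t), c[0] + c[1] * t + c[2] * t**2))   # True
```

The factor $x^2 - \sqrt{B_1}x - 1$ is the second orthogonal polynomial of the standardized variate, which is why the two fits coincide exactly.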

It remains to express all the constants $q_k$ and the $B$'s in terms of the $\beta$'s. All necessary formulae are contained in my paper in Biometrika, 17, 472-9. We have

\[
q_1 = \rho = \frac{\sqrt{\beta_1 (n-1)}}{\sqrt{(n-1)\beta_2 - n + 3}}.
\]
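A Monte Carlo experiment (an editorial addition, not in the paper) makes the expression just given plausible. The exponential parent population is an arbitrary test case with known beta_1 = 4 and beta_2 = 9, and the variance is taken, as in these papers, with divisor n.

```python
import numpy as np

# Monte Carlo check of rho = sqrt(beta_1 (n-1)) / sqrt((n-1) beta_2 - n + 3).
rng = np.random.default_rng(4)
n, reps = 10, 400_000
samples = rng.exponential(1.0, (reps, n))
xbar = samples.mean(axis=1)
s2 = samples.var(axis=1)                 # sigma^2 = (1/n) sum (x_i - xbar)^2

observed = np.corrcoef(xbar, s2)[0, 1]
beta1, beta2 = 4.0, 9.0                  # skewness^2 and kurtosis of Exp(1)
predicted = np.sqrt(beta1 * (n - 1)) / np.sqrt((n - 1) * beta2 - n + 3)
print(round(observed, 3), round(predicted, 3))   # agree to about 3 decimals
```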

Sur les lois de probabilité qui tendent vers la loi de Gauss

... $n - 1$, the function $F_n(x_n)$ is infinite as $x_n \to 0$.

THEOREM 3. If in the neighbourhood of the origin every function $f_i$ is of the form $f_i(x) = x^{\gamma_i}\varphi_i(x)$ $(i = 1, 2, \dots, n)$, where the $\gamma_i$ are constants whose sum $\sum_{i=1}^n \gamma_i = n - 1$, and where each function $\varphi_i$ tends to a finite and positive limit as $x \to 0$, then the function $F_n$ is finite and positive at the point $x = 0$.

When all the functions $f_i$ are infinite at the same endpoint of the interval on which they are positive, the function $F_n$ may be infinite there as well. The conditions imposed on the functions $f_i$ do not prevent them from satisfying the conditions of the theorem of M. P. Lévy (loc. cit.), and consequently the probability law $F_n$ may tend to the Gauss law.

Reprinted from Comptes Rendus de l'Acad. des Sci., Paris, 182, 1590-2, 1926.

On a property of the probability law obeyed by the coefficient of variation

BY JERZY NEYMAN

Since the coefficient of variation is most often considered in anthropometry, where the Gauss law reigns, it seems quite natural to derive its probability law on the supposition that the probability law obeyed by the character under consideration is that of Gauss. However, on carrying out the simple calculation required, one finds that the probability law of the coefficient of variation is represented by a curve with two modal points, all of whose moments about the axis $OY$ are either infinite or indeterminate, a property impossible to interpret in questions of anthropology. One sees that, although a perfect correspondence seems to exist between the Gauss law and the distribution of an anthropometric character, this correspondence has certain limits, beyond which, in order to obtain results comparable with the facts of practice, it is necessary to consider a theoretical probability law other than that of Gauss. The purpose of this Note is to indicate the conditions which must be imposed on the probability law obeyed by the character under consideration in order that the difficulty mentioned should not arise.

Let $x_1, x_2, \dots, x_n$ be a system of $n$ variables representing the values of a character in $n$ individuals taken at random from a 'population'. Let $\bar{x}$ be the arithmetic mean of the numbers $x_i$ $(i = 1, 2, \dots, n)$ and $\sigma^2$ their mean square deviation,

\[
\sigma^2 = \frac{1}{n}\sum_{i=1}^n (x_i - \bar{x})^2.
\tag{1}
\]

The coefficient of variation is then given by the formula $v = \sigma/\bar{x}$. Finally, let $f(x)$ be the probability law obeyed by the variable $x_i$ $(i = 1, 2, \dots, n)$, which we shall suppose positive for $a < x < b$ and zero for $x \le a$ and $b \le x$. Denote by $F(v)$ the probability law obeyed by $v$ and by $\mu_k$ its $k$-th moment about the axis $OY$. Evidently we have

\[
\mu_k = \int_{-\infty}^{+\infty} F(v)\,v^k\,dv
= \int_{-\infty}^{+\infty} v^k\,dv \int_{\Omega(v)} \prod_{i=1}^n f(x_i)\,dx_i,
\tag{2}
\]

where $\Omega(v)$ denotes the $(n-1)$-dimensional domain satisfying the condition $v = \sigma/\bar{x}$, $\sigma$ and $\bar{x}$ being the functions of $x_1, x_2, \dots, x_n$ indicated above. Carrying out successively the two transformations

\[
\xi_k = x_k - \bar{x} \quad (k = 1, 2, \dots, n)
\tag{3}
\]

and

\[
\xi_1 = \bar{x}\,v\sqrt{n}\,\prod_{i=1}^{n-2}\cos\varphi_i, \qquad
\xi_k = \bar{x}\,v\sqrt{n}\,\sin\varphi_{k-1}\prod_{i=k}^{n-2}\cos\varphi_i \quad (k = 2, \dots, n-1),
\tag{4}
\]

where $0 \le \varphi_1 < 2\pi$ and $-\pi/2 \le \varphi_k \le \pi/2$ $(k = 2, 3, \dots, n-2)$, one finds, for $v > 0$,

\[
F(v) = \int_0^{\infty} \bar{x}^{\,n-1}\,d\bar{x} \int_0^{2\pi} d\varphi_1 \int_{-\pi/2}^{+\pi/2} \cdots
\]
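The difficulty described at the opening of this Note is easy to see empirically. The following Monte Carlo sketch is an editorial addition, with an arbitrary Gaussian parent; it shows how heavy the tails of v are when the parent law is Gaussian, since the sample mean can fall arbitrarily close to zero.

```python
import numpy as np

# Tails of the sample coefficient of variation v = sigma / xbar under a
# Gaussian parent: the extreme quantiles keep growing with `reps`,
# reflecting the fact that the moments of v do not exist.
rng = np.random.default_rng(6)
n, reps = 5, 200_000
samples = rng.normal(loc=2.0, scale=1.0, size=(reps, n))
v = samples.std(axis=1) / samples.mean(axis=1)

print(np.quantile(np.abs(v), [0.5, 0.99, 0.9999]))
```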

\[
n\sigma^2 = \sum_{i=1}^n \bigl(x_i - a_i A\sqrt{n}\bigr)^2
\tag{12}
\]

\[
= \sum_{i=1}^n x_i^2 - nA^2.
\tag{13}
\]

One sees easily that the character $z$ is a special case of $\zeta$, corresponding to $a_i = 1/\sqrt{n}$ for $i = 1, 2, \dots, n$. We shall now deduce the probability law of $\zeta$, which is independent of the numbers (10). The probability law determined by the hypothesis $H$ and concerning the set of variables (2) is given by Const $\times$ exp
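The algebra behind the reconstructed pair (12)-(13) can be spot-checked numerically. The sketch below is an editorial addition; the normalization sum a_i^2 = 1 and the definition A = sum a_i x_i / sqrt(n) are assumptions of this reconstruction, under which a_i = 1/sqrt(n) makes A the mean and reduces the identity to the familiar n sigma^2 = sum x_i^2 - n xbar^2.

```python
import numpy as np

# Check: sum (x_i - a_i A sqrt(n))^2 = sum x_i^2 - n A^2,
# assuming sum a_i^2 = 1 and A = sum a_i x_i / sqrt(n).
rng = np.random.default_rng(5)
n = 6
x = rng.standard_normal(n)
a = rng.standard_normal(n)
a /= np.linalg.norm(a)                     # enforce sum a_i^2 = 1

A = a @ x / np.sqrt(n)
lhs = ((x - a * A * np.sqrt(n)) ** 2).sum()   # form (12)
rhs = (x ** 2).sum() - n * A ** 2             # form (13)
print(np.isclose(lhs, rhs))                   # True
```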