Analysis and Design of Experiments: Analysis of Variance and Analysis of Variance Designs


ANALYSIS AND DESIGN OF EXPERIMENTS: Analysis of Variance and Analysis of Variance Designs, by H. B. Mann, Professor of Mathematics, Ohio State University.

This book is a mathematically rigorous, extensive discussion of an important area in modern mathematics: design of experiments, or the analysis of variance and variance designs as statistical procedures. Emphasis is upon rigorous mathematical treatment, including proofs, formula derivation, and principles of statistical inference.

The first six chapters of this book cover the theory of the analysis of variance, with full discussion of the chi square distribution, F distribution functions, analysis of variance in one-way and r-way classification, and the distribution of the variance ratio when the null hypothesis is false, with inclusion of Tang's tables. Experimental design is treated in chapters on Latin squares and incomplete balanced block designs, Galois fields and orthogonal Latin squares, construction of incomplete balanced block designs, factorial experiments, randomized designs, randomized blocks, and quasi-factorial designs. Non-orthogonal data are considered in a separate chapter, while analysis of covariance, interblock estimates and interblock variance are also considered.

This volume has been directed toward three groups of readers: mature mathematicians who wish a knowledge of the subject; graduate or undergraduate classes; and practicing experimenters and statisticians. While the treatment is clear, a knowledge of probability calculus and matrix theory will be helpful to the reader.

"This excellent work is one which every mathematician in any way interested in the foundations of experimental design will find both useful and stimulating." AMERICAN STATISTICAL JOURNAL. "Exposition is admirably clear throughout ... the book should prove both useful and stimulating." QUARTERLY OF APPLIED MATHEMATICS.

14 pages of useful tables. 195pp. 5⅜ x 8. Paperbound $1.45

ANALYSIS AND DESIGN OF EXPERIMENTS

Other Dover Series Books in Mathematics & Physics

THEORY OF SETS by E. Kamke. Translated by Frederick Bagemihl, University of Rochester.

STATISTICAL MECHANICS by A. Khinchin. Translated by G. Gamow, George Washington University.

PROBLEM BOOK IN THE THEORY OF FUNCTIONS, Volume I: Problems in the Elementary Theory of Functions by Konrad Knopp. Translated by Lipman Bers, Syracuse University.

INTRODUCTION TO THE DIFFERENTIAL EQUATIONS OF PHYSICS by L. Hopf. Translated by Walter Nef, University of Fribourg.

A CONCISE HISTORY OF MATHEMATICS by Dirk J. Struik, Massachusetts Institute of Technology.

ANALYSIS AND DESIGN OF EXPERIMENTS
Analysis of Variance and Analysis of Variance Designs

H. B. MANN
PROFESSOR OF MATHEMATICS, THE OHIO STATE UNIVERSITY

NEW YORK
DOVER PUBLICATIONS, INC.

THE DOVER SERIES IN MATHEMATICS AND PHYSICS
W. Prager, Consulting Editor

Copyright 1949 by Dover Publications, Inc.
Printed & Bound in the U.S.A.

To Harold Hotelling

Contents

CHAPTER                                                                 PAGE
Introduction . . . ix
1. Chi-square distribution and analysis of variance distribution . . . 1
2. Matrices, quadratic forms and the multivariate normal distribution . . . 6
3. Analysis of variance in a one way classification . . . 16
4. Likelihood ratio tests and tests of linear hypotheses . . . 22
5. Analysis of variance in an r-way classification design . . . 47
6. The power of analysis of variance tests . . . 61
7. Latin squares and incomplete balanced block designs . . . 76
8. Galois fields and orthogonal Latin squares . . . 87
9. The construction of incomplete balanced block designs . . . 107
10. Non-orthogonal data . . . 130
11. Factorial experiments . . . 139
12. Randomized designs, randomized blocks and quasi-factorial designs . . . 155
13. Analysis of covariance . . . 169
14. Interblock estimates and interblock variance . . . 171
Tables . . . 181

Introduction

The idea to design experiments systematically and with a view to their statistical analysis was first promoted by R. A. Fisher in his well known book "The Design of Experiments". Fisher also proposed the majority of the designs discussed in the present volume. Several designs of great importance, notably the quasi-factorial designs and the incomplete balanced block designs, were discovered by F. Yates. R. A. Fisher's book, however, as well as other publications by R. A. Fisher and F. Yates and their school, are not written for mathematicians. Thus the main emphasis is placed on the explanation of the procedure, with little or no attention being paid to a mathematical formulation of the assumptions and to the principles of statistical inference which lead from the assumption to the statistical method. Moreover, also in many other important papers on analysis of variance and design of experiments, proofs and derivations of formulae are barely sketched if not totally omitted. The present book tries to fill this gap, and the main emphasis is therefore given to a rigorous mathematical treatment of the subject.

In writing this volume the author had in mind a reader with a mathematical background of a student, who majors in mathematics and is in his senior year. References are given whenever the text exceeds this background.

The book is designed to serve three different purposes. First, it was intended to enable a mature mathematician with no background in statistics to study the analysis of variance and analysis of variance designs within a reasonably short time. Secondly, it is intended to serve as a text book for a graduate or advanced undergraduate course in the subject. Finally, it is hoped that this book will be studied by practical experimenters and statisticians who wish to study the mathematical methods used in the analysis of variance and in the construction of analysis of variance designs, and who are willing and able to expend the time and effort necessary for this purpose.

My thanks are due to the Iowa State College Press for their kind permission to include in this book the tables of the F-distribution of G. W. Snedecor's "Statistical Methods", and to the Department of Statistics, University of London, University College, for their kind permission to republish P. C. Tang's tables of the power function of the analysis of variance test from the second volume of the "Statistical Research Memoirs".

I am indebted to Mr. Ransom Whitney, who has assisted me in reading the manuscript and the proofs. I also wish to acknowledge my indebtedness to Professor W. G. Cochran for a very helpful letter.

CHAPTER I

Chi-square Distribution and Analysis of Variance Distribution

In this chapter

certain fundamental concepts of the probability calculus are used. The reader who is not acquainted with these concepts should first acquire the necessary background by reading, for instance, Uspensky's "Introduction to Mathematical Probability," Chapter XII, Sec. 8, example 3; Chapter XIII, Secs. 1-4 and 6; Chapter XV, Secs. 1-6.

Let $x_1, \ldots, x_N$ be normally and independently distributed variables with variances 1 and means 0. We wish to calculate the distribution of the expression

$$\chi^2 = x_1^2 + x_2^2 + \cdots + x_N^2.$$

The joint distribution of $x_1, \ldots, x_N$ is given by the probability density function

$$P(x_1, \ldots, x_N) = (2\pi)^{-N/2} \exp\left[-(x_1^2 + \cdots + x_N^2)/2\right].$$
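The quantity $\chi^2$ defined above follows what is derived in this chapter as the chi-square distribution with N degrees of freedom. As a quick numerical illustration (mine, not the book's), the following Python sketch simulates sums of squares of N independent standard normal variables and compares the empirical mean and variance with the theoretical values N and 2N.

```python
import numpy as np

def simulate_chi_square(N, n_samples=100_000, seed=0):
    """Simulate chi^2 = x_1^2 + ... + x_N^2 for independent standard normals."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_samples, N))   # each row holds x_1, ..., x_N
    return (x ** 2).sum(axis=1)               # one chi-square draw per row

if __name__ == "__main__":
    N = 5
    chi2 = simulate_chi_square(N)
    # For a chi-square variable with N degrees of freedom the mean is N
    # and the variance is 2N.
    print("empirical mean %.3f (theory %d)" % (chi2.mean(), N))
    print("empirical var  %.3f (theory %d)" % (chi2.var(), 2 * N))
```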

Otherwise the second equation would be a multiple of the first, contradicting the assumption of independence of the system (4.16).

Multiplying the ν-th of the equations (4.30) by $b_\nu$ and adding over ν we obtain

(4.32)   $\sum_\alpha (y_\alpha - Y_\alpha) Y_\alpha = 0,$

where $Y_\alpha = \sum_\nu b_\nu g_{\nu\alpha}$. The quantity $Y_\alpha$ is called the regression value of $y_\alpha$ on the variables $g_{1\alpha}, \ldots, g_{p\alpha}$. The minimum $Q_a$ is then given by

(4.33)   $Q_a = \sum_\alpha (y_\alpha - Y_\alpha)^2 = \sum_\alpha (y_\alpha - Y_\alpha) y_\alpha,$

and hence, on account of (4.32),

(4.34)   $Q_a = \sum_\alpha y_\alpha^2 - \sum_\alpha Y_\alpha^2.$

Let now $Y_\alpha^*$ be the regression value of $y_\alpha$ on the $g_{1\alpha}, \ldots, g_{p\alpha}$ under the restrictions (4.6) and (4.7). Then similarly the minimum $Q_r$ under these assumptions is

$Q_r = \sum_\alpha y_\alpha^2 - \sum_\alpha Y_\alpha^{*2}.$

Hence

(4.35)   $Q_r - Q_a = \sum_\alpha (Y_\alpha^2 - Y_\alpha^{*2}).$

The restrictions (4.7) are equivalent to stating that the $\beta_i$ may be expressed by $p - s$ parameters $\gamma_1, \ldots, \gamma_{p-s}$:

(4.36)   $\beta_i = \sum_{t=1}^{p-s} k_{it} \gamma_t, \qquad i = 1, \ldots, p.$

Let $c_1, \ldots, c_{p-s}$ be the maximum likelihood estimates of $\gamma_1, \ldots, \gamma_{p-s}$. Then

(4.37)   $Y_\alpha^* = \sum_i \sum_t c_t k_{it} g_{i\alpha}.$

Multiplying the i-th of the equations (4.30) by $k_{it} c_t$ and summing over i and t we obtain

(4.38)   $\sum_\alpha (y_\alpha - Y_\alpha) Y_\alpha^* = 0.$

Since also $\sum_\alpha (y_\alpha - Y_\alpha^*) Y_\alpha^* = 0$, we obtain

$\sum_\alpha (y_\alpha - Y_\alpha^*) Y_\alpha^* - \sum_\alpha (y_\alpha - Y_\alpha) Y_\alpha^* = 0.$

Hence

(4.39)   $\sum_\alpha (Y_\alpha - Y_\alpha^*)^2 = \sum_\alpha (Y_\alpha - Y_\alpha^*) Y_\alpha = \sum_\alpha Y_\alpha^2 - \sum_\alpha Y_\alpha^{*2}.$

Therefore

(4.40)   $Q_r - Q_a = \sum_\alpha (Y_\alpha - Y_\alpha^*)^2$

and

(4.41)   $F = \frac{N-p}{s} \cdot \frac{\sum_\alpha (Y_\alpha - Y_\alpha^*)^2}{\sum_\alpha (y_\alpha - Y_\alpha)^2} = \frac{N-p}{s} \cdot \frac{\sum_\alpha Y_\alpha^2 - \sum_\alpha Y_\alpha^{*2}}{\sum_\alpha y_\alpha^2 - \sum_\alpha Y_\alpha^2}.$

Testing a linear hypothesis means essentially testing the significance of the coefficients of a regression equation.

From (4.33) and (4.39) it follows also that

(4.42)   $\sum_\alpha (y_\alpha - Y_\alpha)^2 = \sum_\alpha y_\alpha^2 - \sum_\alpha Y_\alpha^{*2} - \sum_\alpha (Y_\alpha - Y_\alpha^*)^2.$

This result can easily be generalized to yield
Theorem 4.2: Let $H_1, \ldots, H_k$ be a sequence of hypotheses on the means $E(y_\alpha)$ of the variables $y_\alpha$ of the form

$H_1:\ E(y_\alpha) = \sum_i g_{i\alpha}\beta_i$;
$H_2:\ H_1\ \&\ \sum_i a^{(1)}_{ti}\beta_i = 0$ for $t = 1, \ldots, s_1$;
$\cdots$
$H_k:\ H_{k-1}\ \&\ \sum_i a^{(k-1)}_{ti}\beta_i = 0$ for $t = s_{k-2}+1, \ldots, s_{k-1}$;

such that the linear restrictions imposed by the $H_i$ are independent of each other. Let $Y^{(i)}_\alpha$ be the regression value of $y_\alpha$ obtained under $H_i$; then

(4.43)   $\sum_\alpha (y_\alpha - Y^{(k)}_\alpha)^2 = \sum_\alpha (y_\alpha - Y^{(1)}_\alpha)^2 + \sum_\alpha (Y^{(1)}_\alpha - Y^{(2)}_\alpha)^2 + \cdots + \sum_\alpha (Y^{(k-1)}_\alpha - Y^{(k)}_\alpha)^2.$

Theorem 4.2 is very useful in reducing the labor involved in computing sums of squares of deviations from a regression value.

We now turn our attention to the solution of the equations (4.30). We write

(4.44)   $\sum_\alpha g_{i\alpha} g_{j\alpha} = a_{ij}, \qquad \sum_\alpha y_\alpha g_{i\alpha} = a_i.$

Suppose, for example, that the hypothesis $\beta_3 = 0$ is to be tested, as when the length of a steel bar is measured at different temperatures. The first step is to estimate $\beta_1, \beta_2, \beta_3$ by least squares. If $b_3$ is the least square estimate of $\beta_3$, a constant $c$ can be computed from the $g_{i\alpha}$ such that, in terms of theorems 4.1 and 4.3,

$F = \frac{(N-3)\, b_3^2}{c\, Q_a}$

has the F distribution with 1 and $N - 3$ degrees of freedom respectively.
Example 3. We shall consider again the one way classification problem treated in Chapter 3. We assumed that we had taken a sample of n&’s from the first classification, n2 from the second and so on. Denoting by x„ the jth measurement in the ith class we have to consider the following linear hypothesis. Assumption E{xu)

The

=

for

/i


the regression value of x, f on Hi

is

7„ =

=

-

4.1

E*Ei

2 i

riiXi. 2

(

x ‘>)

,

•••

l,-*-,



(*«

2

m)

nt).

which

4.1 that

— nx - Ei «,x 2 2

.
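As a concrete companion to Example 3, the following sketch (invented data, not from the book) computes the one-way classification F statistic directly from the sums of squares appearing in the formula above.

```python
import numpy as np

def one_way_f(groups):
    """One-way classification F statistic of Example 3.

    groups : list of 1-d arrays, the i-th array holding the n_i observations
             of the i-th class.
    """
    N = sum(len(g) for g in groups)
    r = len(groups)
    grand_mean = np.concatenate(groups).mean()

    between = sum(len(g) * g.mean() ** 2 for g in groups) - N * grand_mean ** 2
    within = sum(((g - g.mean()) ** 2).sum() for g in groups)

    return (N - r) / (r - 1) * between / within

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    data = [rng.normal(loc=m, size=n) for m, n in [(0.0, 8), (0.3, 10), (0.1, 7)]]
    print(one_way_f(data))   # compare with the F distribution with r-1 and N-r d.f.
```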

the likelihood ratio statistic for testing our hypothesis.

Example 4. We shall now treat the problem of a two way classification. As an example suppose that rs pigs from r different races receive s different diets, such that exactly one pig of the ith race $(i = 1, \ldots, r)$ receives the jth diet $(j = 1, \ldots, s)$. The purpose of the experiment is two fold. We want to see if the pig races differ with respect to the weight gains, and at the same time we should like to know if the different diets differ in their ability to produce weight gains.

Our observations can be arranged into a matrix

$\begin{pmatrix} x_{11} & \cdots & x_{1s} \\ \vdots & & \vdots \\ x_{r1} & \cdots & x_{rs} \end{pmatrix}$

where $x_{ij}$ is the weight gain of the pig from the ith race which receives the jth diet.

We assume now that the weight gain is produced by two factors, race and feed, both of which act independently of each other. Moreover we shall assume that the $x_{ij}$ are normally and independently distributed with common variance and that

(4.82)   $E(x_{ij}) = \mu + \mu_{i\cdot} + \mu_{\cdot j}, \qquad \sum_i \mu_{i\cdot} = \sum_j \mu_{\cdot j} = 0,$

where $\mu$ is a constant independent of i and j, $\mu_{i\cdot}$ is the "effect" of the ith race and $\mu_{\cdot j}$ the effect of the jth diet.

To find $Q_a$ we have to minimize $\sum_{i,j} (x_{ij} - \mu - \mu_{i\cdot} - \mu_{\cdot j})^2$ subject to the restrictions of (4.82). The conditions of theorem 4.4 are however obviously satisfied and we may therefore ignore the restrictions. Thus if $m, m_{i\cdot}, m_{\cdot j}$ are the least square estimates of $\mu, \mu_{i\cdot}, \mu_{\cdot j}$ we have

$m = \bar{x}, \qquad m_{i\cdot} = \bar{x}_{i\cdot} - \bar{x}, \qquad m_{\cdot j} = \bar{x}_{\cdot j} - \bar{x}.$

Thus the regression value is given by

(4.83)   $Y_{ij} = \bar{x}_{i\cdot} + \bar{x}_{\cdot j} - \bar{x}.$

We now apply theorem 4.2 and consider the sequence of hypotheses

(4.84)   $H_1:\ E(x_{ij}) = \mu + \mu_{i\cdot} + \mu_{\cdot j}; \qquad H_2:\ H_1\ \&\ \mu_{i\cdot} = 0; \qquad H_3:\ H_2\ \&\ \mu_{\cdot j} = 0.$

It is easily seen that the regression values are

(4.85)   $Y^{(1)}_{ij} = \bar{x}_{i\cdot} + \bar{x}_{\cdot j} - \bar{x}, \qquad Y^{(2)}_{ij} = \bar{x}_{\cdot j}, \qquad Y^{(3)}_{ij} = \bar{x}.$

Hence by theorem 4.1

(4.86)   $Q_a = \sum_{i,j} (x_{ij} - \bar{x}_{i\cdot} - \bar{x}_{\cdot j} + \bar{x})^2.$

For testing $H_2:\ \mu_{i\cdot} = 0$ we have

(4.87)   $Q_r - Q_a = s \sum_i (\bar{x}_{i\cdot} - \bar{x})^2.$

Similarly for testing $H_2':\ \mu_{\cdot j} = 0$ we obtain

(4.88)   $Q_r - Q_a = r \sum_j (\bar{x}_{\cdot j} - \bar{x})^2.$

We can further simplify (4.86), (4.87), (4.88) by means of (3.1), and applying theorem 4.1 we find that

(4.89)   $F_1 = \frac{(r-1)(s-1)}{r-1} \cdot \frac{s \sum_i \bar{x}_{i\cdot}^2 - rs\bar{x}^2}{\sum_{i,j} x_{ij}^2 - s\sum_i \bar{x}_{i\cdot}^2 - r\sum_j \bar{x}_{\cdot j}^2 + rs\bar{x}^2}$

and

(4.90)   $F_2 = \frac{(r-1)(s-1)}{s-1} \cdot \frac{r \sum_j \bar{x}_{\cdot j}^2 - rs\bar{x}^2}{\sum_{i,j} x_{ij}^2 - s\sum_i \bar{x}_{i\cdot}^2 - r\sum_j \bar{x}_{\cdot j}^2 + rs\bar{x}^2}.$

These statistics for testing the hypotheses $H_2:\ \mu_{i\cdot} = 0$ and $H_2':\ \mu_{\cdot j} = 0$ both have the F distribution and are the likelihood ratio statistics; the degrees of freedom are $r - 1$ and $(r-1)(s-1)$ for $F_1$, and $s - 1$ and $(r-1)(s-1)$ for $F_2$.

Problem 1. Find the proper statistic to test $H_{12}:\ \mu_{1\cdot} = \mu_{2\cdot}$ and $H_{12}':\ \mu_{\cdot 1} = \mu_{\cdot 2}$ in example 4. Hint: apply the corollary to theorem 4.3.
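The statistics $F_1$ and $F_2$ of (4.89) and (4.90) can be computed directly from the row means, the column means and the residual sum of squares (4.86). A minimal sketch for the set-up of Example 4, one observation per race and diet cell, with invented data:

```python
import numpy as np

def two_way_f(x):
    """F statistics of Example 4 for an r x s table with one observation per cell.

    Returns (F1, F2): F1 tests equality of row (race) effects,
    F2 tests equality of column (diet) effects.
    """
    r, s = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)            # row means \bar{x}_{i.}
    col_means = x.mean(axis=0)            # column means \bar{x}_{.j}

    # Residual sum of squares Q_a of (4.86).
    resid = x - row_means[:, None] - col_means[None, :] + grand
    Q_a = (resid ** 2).sum()

    ss_rows = s * ((row_means - grand) ** 2).sum()    # (4.87)
    ss_cols = r * ((col_means - grand) ** 2).sum()    # (4.88)

    df_resid = (r - 1) * (s - 1)
    F1 = (df_resid / (r - 1)) * ss_rows / Q_a
    F2 = (df_resid / (s - 1)) * ss_cols / Q_a
    return F1, F2

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    gains = rng.normal(loc=10.0, scale=1.0, size=(4, 3))   # 4 races, 3 diets
    print(two_way_f(gains))
```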


CHAPTER V

Analysis of Variance in an r-way Classification Design

Let us again consider example 4 of chapter 4. We had rs quantities $x_{ij}$ $(i = 1, \ldots, r;\ j = 1, \ldots, s)$. The observations could be arranged in classes in two ways, and $x_{ij}$ was the value observed in the ith class of the first and in the jth class of the second classification. This idea can be generalized, and we shall in this chapter consider r-way classification designs and their analysis for any r. For practical reasons r will be limited to at most 4 or 5; however, a general treatment of r-way classification designs is just as easy as the treatment of special cases and we shall give it here in all generality.

To give an example of a 3-way classification suppose that we have 10 weather stations. The mean rainfall was recorded at these 10 stations every month in 5 successive years. Every observation is then characterized by 3 numbers: the number of the weather station, the month, and the year in which the observation was made. Thus the observations may be denoted by $x_{a_1 a_2 a_3}$ $(a_1 = 1, \ldots, 10;\ a_2 = 1, \ldots, 12;\ a_3 = 1, \ldots, 5)$, where $a_1$ is the number of the weather station, $a_2$ the month, and $a_3$ the year of observation.

We may for instance want to know whether rainfall was different in different locations or in different years. Differences between different months are certain to be present. These simple questions do not however exhaust the information in which we might be interested. It is of interest to know also whether the combination of a certain location with a certain month has any bearing on the amount of rainfall, or whether rainfall was unusually large in July of a particular year. Accordingly we conceive the mean rainfall in one particular station during one particular month and in one particular year as being made up of the effects of station, month and year, as well as of the interaction of month and year, month and station, year and station, and finally one effect due to the interaction of month, year and station.

&2

)

U3)

“f* ju(l,

2j

m( 2, 3; a 2

,

a 3)

+

3; a,

2, 3, d\

+ +

(5- 1 )

a*( 1;

,

ai)

+

u 3)



/x(l,

m( 2; a 2 )

+

CLi

,

Ct 2)

,

a 3)

+

m(3; a 3 )

#1 ,

where m(1j 2, 3; ai

a2

,

u3 )

,

01

3,

/*(!> 2,

cii

u2

,

>

^

“*

m(1j 2, 3;

1

)

/*(*1

;

I2

®>,

)

=

®i a )

)

®i,

»

“•>

= X)

For instance of station

d

t

u2

,

,

month number

5.

The assumption can

"f"

/i(l,

also be written as •Pataaaa

m(1> 2, 3, d\

+ +

(5- 2 )

d2

m(2, 3;

+

m( 2; a 2 )

d2

,

U 3)

,

+

d3 )

,

2j d\

m( 1, 3;

*

,

Uia),

• ,

(5.3)

* y

t

>

*

|

®»*)

=

0,

* *

*

)

ti)y

:

'

49 where the second summation a = 0 and 1 .-., n(u mation over all combinations

£

r

with

a it




kp

atl

;





,



,

o tJ )

(5.6)



*(*1

’ >





j

ia+i

a,,

;





,



,

a,-„ +1)

a

Z(-ir

T

7-0

x(b i

We o-b x

compute now • ,

• ,

in the last bi

fl6

T ).

sum by

.



»

' '

>

by

The term

a:(6i

• • ,

for every choice k x

Out

;

a bl





,



,

in 5.6 the coefficient of

of the

are for fixed 0 exactly

a

+

1

% ,

,

by

k2

;

a»,





,

numbers



d by ).

xfa •

*,

• ,

,

by

;

a tr ) occurs which contains





,

,

,

kp •

,





,

i a+1

there

*

51 such choices. Hence the coefficient of x(b x





,



,

by

ab

;

,

,

a by ) becomes

-

s

(

+

- 1) "’(“ = (-i )“ +i

r

-

(-i)'(“

r )

7 )

j

1 (-l)** -*.

= This proves

-7

r

+

5.5.

We show next that the solutions in 5.5 satisfy the restrictions = x(l; a — x. We have by 5.6 is clear for A(l; a

in 5.3. This

x

A(l,

=







x(l,

,

a





+



1; «i

+

a

,

(5.6a)

x)

)







,



1; at

a«+i)

,



,



,

a„ +1 )





a

E E



A(ki

,

,

kp

;



Oi, >







j

-0 !,•••, o+l

5.6a over a x and applying mathematical induction

Summing we obtain 52 A(l,

• •



x(2,

+

a

,







1; ai

+

a

,



1;

-EE



,



a a+ 1)

,

a2



a a+ 0

• ,

,

a

fl-0 2,

by

• •

,

a(*,

,

kp

;

a*.

a+1

ik now shows that the A(i x • a„) are the unique solutions of the minimum problem. form quadratic minimize the wish to Suppose we

The o



5.4.



( ,

following argument





;

,

,





,

,

Q — ) f

°i

) i

*

*

*

) “•«

x(il

1

• J

*

* /

ia

t

' '

toil

)

*

)

Qia)

L



^2

0-0 *!,•••,*«

m(^i

‘ >



i

®*i

' >

T

’ '

>



52 under the restrictions 5.3 and no further restrictions. The solution to 5.4 which as we have shown satisfy the restrictions are then1 the uniquely determined values for which Q' = 0. Hence if we would write out the least square equations including the terms resulting from the Lagrange operators we should

a k,

• >

We



A{k

solutions for the

kf

• k

,

,

;

ake) since Q' can take only one minimum value.

>

apply

now Theorem

H

— Hi &

If

same

get the

still



yf* -*

2

&





4.2 to our sequence of hypotheses.

&



,

y

j

*

* |

j

dik)

* ,



In

£

£o«..» ~

A fa

times.

n:i.... b

r

bi

each



£



,

•••

ik

;

,

^

i z

a-0

*

1,2,

*

,

=

• • •







lo

)



!

a i„)"

)

r

£

Q=

52

52

a

!**>*«»

#*(*

i

>

r

(5.9)

•••

•[A(ti

,

i

,

'

Mtfl

' '

»

)



fa

»





l

l

Besides testing hypotheses concerning sets of interactions m(»i

• • ,



i



;

«i





,



a„) for

,





all

,a a we



,

may

also wish

to test hypotheses which concern individual interactions. In

such cases certain sets of interactions n(i ia Oi aa) will be assumed to be equal to 0 for all a„ We shall refer to such interactions as interactions of type I. Other interactions n(il ax a>„ ) will be unknown ia for all di a„ These we shall call interactions of type II. In one such set of interactions, however, we may wish to test hypotheses concerning individual values of the set and we shall call those interactions of type III. Equation 5.9 shows that for finding Q a and Q, we must put n(i, a, a a) = 0 for interactions of type I and n(i t ia a a„) = ^(*1 »••»*'«» «i »**•



52 at



• ’

52

[AO'.

• • ,



,

jt

;

o.

• ,





,

,

at)

a*

-

2

mO’i

• • »

'

I

jk

;

a.

• • ,

*

,

a*)]

54 with respect to mO'i

'

ju(ii

'







j

"

)

,

at

jk

'

)

I

!

jk

I

• • ,

Ol



under the

a*)

,

)

>

>

restrictions

0

®t)

'

ffl«

I



and certain other

imposed by assumption and

restrictions

hypothesis.

As an example consider a three way classification and assume all 2nd order interactions are 0. We wish to test the all interactions between the first and second

that

hypothesis that

The assumption then

classification are 0.

m( 1>

2, 3;

tti

a2

,

(® i

=

The hypothesis

=

is /z(l,

2;

a

x

=

a, 3

,

lj

=

a2 )

>

=

(a t

0,

£3 )



1,

... < 2 ). The number of linear restrictions h a 2 = 1, imposed by the hypothesis is (ty — 1)((ii

-

2

iiX

.

at

Suppose that 5.11 *

* *

°t‘i

is

23

[-^(^1

*

*

7

*

k.

ik

1

)

From

5.10

&

*

*

ix

^»*)]

*

7

we have

7

°t* *

*

3E]

(5.12)




r

1.

and consider the Galois

The order

Every element of this Galois field + akx k k < s where a0 a, •





a divisor of x

of this Galois field

of the

is

• • ,

form a0

is

a,x

+

mod

p.

+

a k are residues

100

+ax +

(a 0

(8.21)





x



+ a xY k

=

=

since x*’

x(J(x), p)

a0

+

bound for 4he order in our Galois Theorem 8.3 since s > r.

Lemma p

8.4:

)

x \9 2



'

>

ffi

>

j

x

g-i

02

)

)•

The system thus constructed the element

(0,

1,



• •

,

1)

is not a field since, for instance, has no inverse in multiplication.

However, the postulates I-IV for addition and I-III for multiplication and postulate V are fulfilled. All the “points” which have no 0 among their coordinates possess inverses. Let o,

=

i,

fl4°,

be the marks of G.F.(p”‘), then

•••

if

,

r

g$“'



)

k

+

K

v lv i

contained in P.G.(m, p“)

of P.G.(fc, p")

becomes rq 4

+D- n

v

)

(P

p

+

m>

n )

,n

p

,n

)p

>

s

0.

contained in

is

+

+ pQ

(p"+

)



o/ a P.G.(m,

p")

B. Every line





(p

P.G.(s,

)’ s

contained in

is

n

r

p

one obtains in particular





2

(p

P.G.(s,

p

+ Pm •••(?!»+•+ + (p”) + p«) /or m>s> ")









n )





p-)





1.

Every P.G.(s, p^n) contains, with every pair of points, also the whole line joining them. Thus every pair of points is contained in λ different P.G.(s, p^n).

We may now (s,

p

identify the points with varieties and the P.G. with blocks. Then we have the following theorem.

)

n

Theorem 9.1: The P.G.(s, p^n) contained in a P.G.(m, p^n) form a balanced incomplete block design with the parameters (9.6)

h

-

(1

+ P" + (1 +

'





'

=

b(s,

v

=

1

+ p* +





k



1

+ p" +





+



+



^ ,n

p

'

+ +

fa'"

(p'- 1 ’"

)

n

m, p

),





+ p mn = +

p’

n

=

v{m, p

n ),

k(s, p”),







+ pQ

p’^p’"

.

111

+

(p" r

n

)

+

(p

=

+ pmn + p”)

-

(P’





+ Pml



- 1)B + P*> (p (



• •

r(s, to, p"),

=

if s

1

2

(p

+ +

"











1

=

.

+ pm + p“) ")

2n

(p

We

+

n •





,n •





m,pn)

X(s,

+

(P









(p'" 1,n



>

s

if



+

m

+

P l n

P'")p'

1

next consider the points in P.G. (to, p") common to a - 1, p”) and a given P.G.(s, p") not contained which is not contained x

given P.G. (to

Let p be a point in the P.G.(s, p”) n in the P.G. (to - 1, p ). Let q pendent points in the P.G.(m — 1, p”). 1 linearly independent points are

in

it.



x



,

,

m +

of

P.G.(m, p")

of the

is

XiPi

Now

let Pi

,



,

m

linearly inde-

Then

pi

,





,



,

qm

form

+ •

p2

qm be

and hence every point

^2