
MODERN COMPUTING METHODS

SECOND EDITION

PHILOSOPHICAL LIBRARY
NEW YORK

Published 1961 by Philosophical Library, Inc., 15 East 40th Street, New York 16, N.Y.

All rights reserved

Printed in England for Philosophical Library Press, Bristol 1


The first multiplier m12 is obtained from the equation

    m12 a11 + a21 = 0,

and the coefficients of the new (ii) are obtained from equations typified by

    α24 = m12 a14 + a24.

The second column of multipliers is obtained from the equations

    m13 a11 + a31 = 0,
    m13 a12 + m23 α22 + a32 = 0,

and the coefficients of the new (iii) from equations typified by

    β34 = m13 a14 + m23 α24 + a34.

Finally, the last column of multipliers is obtained from the equations

    m14 a11 + a41 = 0,
    m14 a12 + m24 α22 + a42 = 0,
    m14 a13 + m24 α23 + m34 β33 + a43 = 0,

and the coefficients of the new (iv) from equations typified by

    γ44 = m14 a14 + m24 α24 + m34 β34 + a44.

The final equations from which back-substitution is performed are not the same as the pivotal equations of the previous method, unless the chosen pivot at each stage in the latter is the first element in its column. In this event the two sets of equations are the same, except for rounding errors.
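In present-day terms the compact scheme just described can be sketched in Python; this is an illustrative addition, not part of the original text, and the function names and the small test system are assumptions.

    # Illustrative sketch of compact elimination in the manner described above:
    # each multiplier is chosen to annihilate one coefficient of the new equation,
    # and each new pivotal row is a combination of earlier pivotal rows with an
    # original row of A.
    def compact_elimination(A, b):
        n = len(A)
        rows = [list(A[0]) + [b[0]]]          # pivotal row (i): the first equation
        multipliers = {}
        for s in range(1, n):                 # form the new equation (s + 1)
            new_row = list(A[s]) + [b[s]]
            for r in range(s):
                m = -new_row[r] / rows[r][r]  # chosen so the coefficient of x_{r+1} vanishes
                multipliers[(r, s)] = m
                for j in range(n + 1):
                    new_row[j] += m * rows[r][j]
            rows.append(new_row)
        return rows, multipliers              # 'rows' is the triangular set for back-substitution

    def back_substitute(rows):
        n = len(rows)
        x = [0.0] * n
        for i in range(n - 1, -1, -1):
            s = rows[i][n] - sum(rows[i][j] * x[j] for j in range(i + 1, n))
            x[i] = s / rows[i][i]
        return x

    if __name__ == "__main__":
        A = [[2.0, 1.0, 1.0], [4.0, -6.0, 0.0], [-2.0, 7.0, 2.0]]
        b = [5.0, -2.0, 9.0]
        rows, m = compact_elimination(A, b)
        print(back_substitute(rows))          # [1.0, 1.0, 2.0]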

The saving in recording time and space is clear. The arrangement is also satisfactory in that quantities to be multiplied together lie in the same row of the computing sheet. The method of Crout, described in [16], has a still more compact lay-out but is less proof against error, since numbers to be multiplied together do not have this favourable combination of position. In the case of symmetric matrices labour may be saved by computing the m's from the α's by means of the relation mrs = −αrs/αrr.

METHODS DEPENDING ON MATRIX PROPERTIES

10. The matrix of coefficients in (20) is denoted by U and called upper triangular, since all its elements below the main diagonal are zero. A matrix with zero elements above this diagonal is labelled L and called lower triangular. Triangular matrices are obviously more convenient than complete matrices for solving linear equations: the determinant of such a matrix, moreover, is just the product of the diagonal terms.

With the Gaussian elimination method, the original equations (1), for which the matrix is complete, were transformed into equations (20), for which the matrix is upper triangular. The elimination is effectively equivalent to multiplying the original matrix A by a lower triangle L, producing an upper triangle U; thus

    LA = U.    (22)

The equations from which the solutions are obtained by back-substitution are then

    LAx = Ux = Lb.    (23)

The matrix L here has ones in its diagonal; hence from (22) and the fact that the determinant of a matrix product is the product of the separate determinants [13], we see that the determinant of A is the same as that of U, and equal to the product of the diagonal terms of U. (If the pivots do not all lie on the diagonal, the sign of the determinant may be changed.) Another class of method, the best for desk machines, uses the fact that a square matrix can be expressed as the product of two triangles, in the form

    A = LU,    (24)

provided that it has non-zero leading principal minors, that is, the determinant composed of elements common to the first r rows and first r columns of A is non-zero for every r = 1, 2, ..., n − 1. The diagonal terms of either L or U can here be chosen arbitrarily, the rest then being determined uniquely. If the matrix is symmetric, the diagonal of U is best taken to be the same as that of L; then U is the transpose of L, so that only one triangle has to be determined from the equation

    A = LL′,    (25)

though some elements will be imaginary if A is not positive definite (Chapter 3, § 3).
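A minimal Python sketch of the symmetric resolution (25) follows; it is an illustrative addition, under the assumption that complex square roots are acceptable, so that the factor still exists, with imaginary elements, when A is not positive definite.

    # Sketch of A = L L' for a symmetric matrix, equation (25).
    # Complex arithmetic lets the factorization proceed when A is indefinite,
    # at the cost of imaginary elements in L.
    import cmath

    def symmetric_triangle(A):
        n = len(A)
        L = [[0j] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1):
                s = A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
                if i == j:
                    L[i][j] = cmath.sqrt(s)   # imaginary if the reduced diagonal term is negative
                else:
                    L[i][j] = s / L[j][j]
        return L

    if __name__ == "__main__":
        A = [[4.0, 2.0], [2.0, -3.0]]         # indefinite: the second reduced diagonal is negative
        L = symmetric_triangle(A)
        print(L)                              # L[1][1] is purely imaginary
        print([[sum(L[i][k] * L[j][k] for k in range(2)) for j in range(2)] for i in range(2)])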

11. When multiplying two matrices on desk machines, it is best to record the transpose of the right-hand matrix vertically beneath the left-hand matrix, so that the rule (10) for multiplication can be written as

    AB = [ r1(A)r1(B′)   r1(A)r2(B′) ]
         [ r2(A)r1(B′)   r2(A)r2(B′) ]

and elements of rows are multiplied together, corresponding elements lying in the same column. In particular, if B is A′ we have

    AA′ = [ {r1(A)}²      r1(A)r2(A) ]    (26)
          [ r1(A)r2(A)    {r2(A)}²   ]
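The rule can be stated in a few lines of Python; the following is an illustrative addition (names assumed), in which the second factor is supplied already transposed, exactly as in the desk layout above.

    # Element (i, j) of AB is the scalar product of row i of A with row j of B'
    # (that is, with column j of B), corresponding elements lying in the same column.
    def matmul_by_rows(A, Bt):
        return [[sum(a * b for a, b in zip(rowA, rowBt)) for rowBt in Bt] for rowA in A]

    if __name__ == "__main__":
        A  = [[1.0, 2.0], [3.0, 4.0]]
        Bt = [[5.0, 7.0], [6.0, 8.0]]         # transpose of B = [[5, 6], [7, 8]]
        print(matmul_by_rows(A, Bt))          # [[19.0, 22.0], [43.0, 50.0]]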

The determination of L and U in general is sufficiently illustrated by consideration of a matrix of order three. The notation and arrangement are as follows, the triangle L being here taken with unit diagonal elements, and the transpose of the upper triangle U being the lower triangle U′.

    A                          b     Σ
    a11   a12   a13            b1    Σ1
    a21   a22   a23            b2    Σ2
    a31   a32   a33            b3    Σ3

    L                          U′
    1                          u11
    l21   1                    u12   u22
    l31   l32   1              u13   u23   u33
                         y′    y1    y2    y3
                         S     S1    S2    S3
                         x     x1    x2    x3

The method and order of calculation are as follows. The multiplication rule gives

    a11 = r1(L) r1(U′) = u11,   a12 = r1(L) r2(U′) = u12,   a13 = r1(L) r3(U′) = u13,

giving the first column of U′;

    a21 = r2(L) r1(U′) = l21 u11,

giving the second row of L;

    a22 = r2(L) r2(U′) = l21 u12 + u22,   a23 = r2(L) r3(U′) = l21 u13 + u23,

giving the second column of U′;

    a31 = r3(L) r1(U′) = l31 u11,   a32 = r3(L) r2(U′) = l31 u12 + l32 u22,

giving the third row of L; and finally

    a33 = r3(L) r3(U′) = l31 u13 + l32 u23 + u33,

giving the last element of U′. The unit diagonal elements of L need not be recorded. In the symmetric case the elements of the single triangle are denoted by l11, l21, l31, l22, l32 and l33, and we have the equations

    a11 = l11²,   a12 = l11 l21,   a13 = l11 l31,
    a22 = l21² + l22²,   a23 = l21 l31 + l22 l32,   a33 = l31² + l32² + l33²,

for the successive determination of the l's.

12. When this triangular resolution or decomposition is finished, we can solve the linear equations (3) by two processes of back-substitution. Introducing the auxiliary vector y, defined by

    Ly = b,    (27)

we can write

    LUx = Ax = b,  or  Ux = y,    (28)

solving for y from (27), and for x from (28).

The elements of y are obtained in the same way as those of U′. If they are written in transposed form as a row vector with components y1, y2, y3, shown in position in the arrangement of § 11, then the equation Ly = b gives

    y1 = b1,   l21 y1 + y2 = b2,   l31 y1 + l32 y2 + y3 = b3,

from which the y's are obtained in succession. As a check on the work we form a sum column Σ, composed of the row sums of A and b, and a sum row S, composed of the column sums of U′ and y′. As each of the latter becomes available we use the successive relations

    r1(L) S = Σ1, or S1 = Σ1;   r2(L) S = Σ2, or l21 S1 + S2 = Σ2;   r3(L) S = Σ3, or l31 S1 + l32 S2 + S3 = Σ3.

We finally calculate x from equations typified by

    yr = cr(U′) x.    (29)

The elements of x are recorded in the position shown, and calculated from the successive equations obtained by taking r = 3, 2, 1 in (29), given by

    u33 x3 = y3,   u23 x3 + u22 x2 = y2,   u13 x3 + u12 x2 + u11 x1 = y1.

A suitable check on this back-substitution is formed from the column of row sums of U′; if these are denoted by s1, s2, s3, the check is

    s1 x1 + s2 x2 + s3 x3 = y1 + y2 + y3.

It should be noticed that the array of multipliers in the Doolittle method (§ 9) is the same as L′, with the signs changed and the unit diagonal terms omitted, and that the array of coefficients a1j, α2j, β3j of Doolittle's resulting equations is the same as U. Thus the computations are precisely the same in the two methods; only the arrangement differs. If the matrix is symmetric, equations (27) and (28) are replaced by

    LL′x = Ax = b,  or  Ly = b,  L′x = y.    (30)
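The whole of §§ 11-12, decomposition, forward course, back-substitution and the two sum checks, can be sketched in Python as follows; this is an illustrative addition, not the book's layout, with names and the small test system assumed.

    # Triangular resolution A = LU with unit diagonal in L, then Ly = b and Ux = y,
    # together with the sum checks described in the text.
    def triangular_solve(A, b):
        n = len(A)
        L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
        U = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i, n):                                  # row i of U
                U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
            for j in range(i + 1, n):                              # column i of L
                L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
        y = [0.0] * n
        for i in range(n):                                         # forward course, Ly = b
            y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
        x = [0.0] * n
        for i in range(n - 1, -1, -1):                             # back-substitution, Ux = y
            x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
        # forward check: r_i(L) . S = Sigma_i, with S the column sums of U' plus y
        S = [sum(U[k][j] for j in range(n)) + y[k] for k in range(n)]
        Sigma = [sum(A[i]) + b[i] for i in range(n)]
        fwd = [sum(L[i][k] * S[k] for k in range(n)) - Sigma[i] for i in range(n)]
        # back-substitution check: (row sums of U') . x = y1 + y2 + ... + yn
        s = [sum(U[k][j] for k in range(n)) for j in range(n)]
        back = sum(sj * xj for sj, xj in zip(s, x)) - sum(y)
        return x, fwd, back

    if __name__ == "__main__":
        A = [[4.0, 2.0, 1.0], [2.0, 5.0, 2.0], [1.0, 2.0, 3.0]]
        b = [7.0, 9.0, 6.0]                                        # the row sums, so x = (1, 1, 1)
        x, fwd, back = triangular_solve(A, b)
        print(x, fwd, back)                                        # both checks should be near zero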

13. The solution by this method of the previous example of § 8 is given on the next page, the arrangement being the same as before, the only changes being that the complete U′ is attached and the sums are omitted. The fact that the answers agree with those of § 8 to barely four decimals is due to the ill-conditioning of the equations noted previously.

14. For matrix inversion there are various possibilities following the triangular resolution. One method is to invert both L and U, and then find A−1 from

    A−1 = U−1 L−1 (unsymmetric case),   A−1 = (L′)−1 L−1 (symmetric case).    (31)

In the second case L−1 = {(L′)−1}′, so that only one triangle has to be inverted. The arrangement for the inversion of the triangles and the final multiplication is described in [16]; still more compact methods are described in [17]. These compact methods, in which at most one triangle is inverted, are usually preferred.
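As a sketch of the first possibility of § 14 (an illustrative addition, with assumed names and data), the two triangles can be inverted separately in Python and multiplied.

    # Inversion by way of the triangular factors: A^{-1} = U^{-1} L^{-1}.
    def invert_unit_lower(L):
        n = len(L)
        M = [[0.0] * n for _ in range(n)]
        for i in range(n):
            M[i][i] = 1.0
            for j in range(i):
                M[i][j] = -sum(L[i][k] * M[k][j] for k in range(j, i))
        return M

    def invert_upper(U):
        n = len(U)
        M = [[0.0] * n for _ in range(n)]
        for i in range(n - 1, -1, -1):
            M[i][i] = 1.0 / U[i][i]
            for j in range(i + 1, n):
                M[i][j] = -sum(U[i][k] * M[k][j] for k in range(i + 1, j + 1)) / U[i][i]
        return M

    def matmul(A, B):
        n = len(A)
        return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

    if __name__ == "__main__":
        L = [[1.0, 0.0], [0.5, 1.0]]
        U = [[4.0, 2.0], [0.0, 4.0]]          # so that A = LU = [[4, 2], [2, 5]]
        print(matmul(invert_upper(U), invert_unit_lower(L)))
        # [[0.3125, -0.125], [-0.125, 0.25]], the inverse of [[4, 2], [2, 5]]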

[Tableau, on the facing page in the original: the solution of the example of § 8 by triangular resolution, showing A and b, the factors L and U′, the vector y′, the sum row S, and the solution x; the numerical entries are not reliably legible in this copy.]

LINEAR EQUATIONS AND MATRICES: DIRECT METHODS ON AUTOMATIC COMPUTERS

INTRODUCTION

1. When linear equations are solved on a desk machine, the minimization of the number of quantities which have to be written down and the convenience of the layout are of paramount importance. The time taken to write a number is comparable with that taken to perform an arithmetical operation, and a high percentage of the total number of mistakes occurs in the writing stage. Although properly applied checks give complete protection against undetected errors, the solution of a system of

equations of quite moderate order is tedious unless mistakes are infrequent. In contrast, an automatic computer in good working order may be relied upon to solve a very large system of equations without making any mistakes. The problems of layout are still present, though in a rather different form. They are now mainly concerned with the use of an auxiliary store having an access time different from that of the high-speed store. For instance, it is often of decisive importance whether matrices are held in the auxiliary store in rows or in columns.

2. In other respects, an automatic computer is less flexible than a desk machine. For example, the method of triangular decomposition described in §§ 10-14 of the previous chapter is superior to the elimination method only if the scalar products are accumulated without rounding each contribution. It is therefore essentially a fixed-point method, but an experienced hand computer will add extra figures when the need arises. In practice this need is likely to arise quite frequently, because no provision has been made for the selection of pivots comparable with that described in Gaussian elimination. In consequence, a diagonal element of the upper triangular matrix U may sometimes be quite small, with the result that some of the elements of L and U are considerably larger than those of the matrix of the original equations.

Unsystematic modifications of this type are unsatisfactory in automatic work. In general, in order to achieve the maximum accuracy in the solution we try to design programmes in such a way that all numbers fully occupy the storage registers (perhaps it should be added that most automatic computers do not work any faster with numbers which are less than a full word length). Accordingly, in this chapter we shall analyse the computing procedures in more detail.

3. The precise details of the arithmetical facilities of the computer are of great importance. On many machines both fixed-point and floating-point facilities are available. If floating-point operations are used throughout, then there is apparently no need to give detailed attention to the size of numbers arising during the course of the computation. It is shown in Chapter 5, however, that this procedure does not remove the necessity for interchanges in Gaussian elimination and related methods. Moreover, for a given word length the precision of floating-point arithmetic is lower than that attainable with fixed-point arithmetic because some digits have to be allocated to the exponent. In most matrix problems it is possible to carry out the necessary scaling in fixed-point operations using fewer digits than would be required for the floating-point exponents.

4. Fixed-point operation is particularly advantageous on computers which are equipped with the following two facilities.

(i) The ability to accumulate scalar products, Σ a_k b_k, exactly. This is possessed by all desk machines; the exact scalar product is usually produced whether required or not. On automatic computers the decisive feature is whether or not double precision in the product may be obtained without special programming. The main advantage of this facility is that only one rounding error is made, this occurring when the accumulated sum is rounded to single precision.

(ii) The ability to divide a double-precision number by a single-precision number. When this facility is provided in conjunction with the facility (i), only one rounding error is introduced in the computation of quantities of the form (Σ a_k b_k)/c.

Both facilities are available on the computers in use at N.P.L., and their provision elsewhere is becoming increasingly common.

GAUSSIAN ELIMINATION

5. Elimination with selection of pivots may be carried out quite satisfactorily with fixed-point arithmetic. Storage presents no special problems because the successive reduced equations may be overwritten in the locations occupied by the original matrix. It is usually desirable to design the programme so that it will deal simultaneously with any number, r say, of right-hand sides. For a system of order n, a total of n(n + r) storage locations is then required.

6. Since pivotal selection is used, all the multipliers are bounded by unity and may therefore be computed without scale factors. If the original equations are scaled so that all coefficients and constant terms are numerically less than ½, say, then it is extremely uncommon for an element of any of the reduced equations to exceed unity. In a general-purpose programme it is desirable to include some procedure which deals automatically with the danger of 'overspill'. The following method is based on the observation that if all the elements of a given reduced set of equations are less than ½, then no element of the next reduced set can exceed unity. As each reduced equation is formed, its elements are tested for size. If any exceeds ½, then that equation is divided throughout by 2. In this way the maximum precision is preserved and the possible need to repeat the computations as a result of overspill is avoided.

7. In the programmes written for the N.P.L. computers, the elements of the rth pivotal row are interchanged with those of the rth row at each stage of the reduction so that the final triangular set of equations is stored in the natural order. For this reason elimination with selection of pivots is often referred to as elimination with interchanges. On other machines it is sometimes more convenient to leave each row in its original position and to store the locations of the pivotal rows. The location of the pivotal equation for the (r+1)th reduction may be determined during the course of the rth reduction; we merely have to keep a record of the size and location of the numerically largest coefficient of xr+1 occurring in the reduced equations up to and including the current stage.
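A brief Python sketch of elimination with interchanges may make the bookkeeping concrete; it is an illustrative addition (floating-point rather than the scaled fixed-point arithmetic discussed above, with names and data assumed).

    # Gaussian elimination with selection of pivots (interchanges) and back-substitution.
    def eliminate_with_interchanges(A, b):
        n = len(A)
        M = [row[:] + [bi] for row, bi in zip(A, b)]     # augmented matrix, overwritten in place
        for r in range(n - 1):
            p = max(range(r, n), key=lambda i: abs(M[i][r]))
            M[r], M[p] = M[p], M[r]                      # bring the pivotal row into the rth position
            for i in range(r + 1, n):
                m = -M[i][r] / M[r][r]                   # multiplier, bounded by unity
                for j in range(r, n + 1):
                    M[i][j] += m * M[r][j]
        x = [0.0] * n
        for i in range(n - 1, -1, -1):
            x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
        return x

    if __name__ == "__main__":
        A = [[0.0003, 1.0], [1.0, 1.0]]                  # small leading coefficient: the interchange matters
        b = [1.0, 2.0]
        print(eliminate_with_interchanges(A, b))         # approximately [1.0003, 0.9997]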

THE BACK-SUBSTITUTION

8. In the reduction to triangular form, the need for scaling is almost non-existent. In the back-substitution, however, scaling is the major preoccupation in fixed-point work, since the scaling of the original equations places no restriction whatever on the size of the solutions. A convenient scheme is the following. The rth pivotal equation has the form

…, which is more than twice as great. If we iterate with this matrix, whose roots are …, −2, −3, we tend to x3 at a speed determined by the rate at which (2/3)^s → 0.

7. The process may be used to give convergence either to the largest or to the smallest root; it cannot be used to determine the intermediate roots. These can be found by a process of successive root-removal described later in §§ 10-13. Sometimes, however, the root nearest to a given value p is required. To find this we may make use of a device similar to that of § 6. From equation (28), with xi replaced by x and λi by λ, we obtain the equation

    (A − pI)−1 x = (λ − p)−1 x.    (29)

This shows that the latent vectors of (A − pI)−1 are the same as those of A, but the latent root corresponding to λ is (λ − p)−1. The dominant root of (A − pI)−1 will correspond to the root of A nearest to p, because this will give the greatest value of (λ − p)−1. It is unnecessary to compute the inverse of A − pI explicitly; indeed, it would be uneconomical to do so in general. We perform Gaussian elimination, or triangular decomposition with interchanges, on the matrix A − pI. Iteration may then be carried out by using the relations

    (A − pI) zi+1 = yi,    (30)
    yi+1 = zi+1 ÷ (numerically largest element of zi+1),    (31)

as described in Chapter 2, §§ 9-11. If p is close to a latent root, the matrix A − pI will be ill-conditioned. Nevertheless, the accuracy of the latent vectors determined in this way is unaffected; see [32].
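The following Python sketch illustrates relations (30) and (31); it is an added example, not from the original text, and it uses an ordinary dense solver in place of the triangular decomposition recommended above. The test matrix is a 3 × 3 matrix with latent roots −2, −3 and 11 (the matrix of the root-removal example later in this chapter).

    # Inverse iteration with a shift p: solve (A - pI) z = y, then normalize by the
    # numerically largest element of z; the iterates tend to the vector of the root
    # of A nearest to p, and p + 1/(largest element) estimates that root.
    def solve(M, rhs):
        n = len(M)
        A = [row[:] + [r] for row, r in zip(M, rhs)]
        for k in range(n):                                # elimination with interchanges
            p = max(range(k, n), key=lambda i: abs(A[i][k]))
            A[k], A[p] = A[p], A[k]
            for i in range(k + 1, n):
                f = A[i][k] / A[k][k]
                for j in range(k, n + 1):
                    A[i][j] -= f * A[k][j]
        x = [0.0] * n
        for i in range(n - 1, -1, -1):
            x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
        return x

    def inverse_iteration(A, p, steps=25):
        n = len(A)
        shifted = [[A[i][j] - (p if i == j else 0.0) for j in range(n)] for i in range(n)]
        y = [1.0] * n
        root = p
        for _ in range(steps):
            z = solve(shifted, y)
            big = max(z, key=abs)
            y = [zi / big for zi in z]
            root = p + 1.0 / big                          # dominant root of (A - pI)^-1 is 1/(lambda - p)
        return root, y

    if __name__ == "__main__":
        A = [[2.0, 3.0, 2.0], [10.0, 3.0, 4.0], [3.0, 6.0, 1.0]]
        print(inverse_iteration(A, -1.9))                 # converges to the root -2 and its vector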

8. A second iterative method for finding the latent root of largest modulus and its corresponding vector is that of matrix powering. If the sequence A, A², A⁴, A⁸, ... is formed, then all the columns of A^(2^k) become parallel to the dominant latent vector. This can be seen if we express A in terms of its columns in the form

    A = [a1, a2, ..., an].

Then

    A² = AA = [Aa1, Aa2, ..., Aan],
    A⁴ = A²A² = [A³a1, A³a2, ..., A³an],
    ...........................................
    A^(2^k) = [A^(2^k − 1)a1, A^(2^k − 1)a2, ..., A^(2^k − 1)an].    (32)

The proofs of § 5 show that each of the columns in parentheses is ultimately parallel to the dominant latent vector. The speed of convergence is given by the rate at which (λ2/λ1)^(2^k) → 0. This gives more rapid convergence than the previous iterative method, but each iteration requires n³ multiplications instead of n². It is readily seen that for a given ratio of λ2 and λ1, matrix powering is the more efficient method only for matrices of low order. The method has the further disadvantage that if A contains a number of zero elements, then these zeros do not persist in the matrix powers.
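As an added illustration (not from the original), repeated squaring can be carried out in a few lines of Python; the rescaling step merely keeps the entries of the powers within range.

    # Matrix powering: form A, A^2, A^4, ...; the columns of the high powers become
    # parallel to the dominant latent vector.
    def matmul(A, B):
        n = len(A)
        return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

    def dominant_vector_by_powering(A, squarings=6):
        P = [row[:] for row in A]
        for _ in range(squarings):                         # P becomes (a multiple of) A^(2^squarings)
            P = matmul(P, P)
            big = max(abs(x) for row in P for x in row)
            P = [[x / big for x in row] for row in P]      # rescale to avoid overflow
        col = [row[0] for row in P]                        # any column is now nearly the dominant vector
        big = max(col, key=abs)
        return [c / big for c in col]

    if __name__ == "__main__":
        A = [[2.0, 3.0, 2.0], [10.0, 3.0, 4.0], [3.0, 6.0, 1.0]]   # dominant root 11
        print(dominant_vector_by_powering(A))              # approaches [0.5, 1.0, 0.75]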

METHODS FOR FINDING A SUBDOMINANT ROOT

9. The iterative methods described are suitable for finding the greatest or smallest root of a matrix. To find a subdominant root by these methods the dominant root must first be removed from the matrix. For a symmetric matrix the simplest method is the following. Suppose we have the root λ1 and corresponding vector x1, normalized so that x1′x1 = 1, of a matrix A. Then the matrix A1, defined by

    A1 = A − λ1 x1 x1′,    (33)

has the same latent roots and vectors as A except that the root corresponding to x1 has become zero. For we have

    A1 x1 = A x1 − λ1 x1 x1′ x1 = λ1 x1 − λ1 x1 = 0.

Also, if λ2 and x2 are another root and vector of A, then

    A1 x2 = A x2 − λ1 x1 x1′ x2 = λ2 x2,

since x1′ x2 = 0 by the orthogonality relation for latent vectors of a symmetric matrix.

10. For unsymmetric matrices the following is probably the simplest method of removing a known root λ1 and vector x1. The matrix A is written in partitioned form as

    A = [ p1′ ]    (34)
        [ B   ]

where p1′ is the first row of A. The known vector x1 is normalized so that its first component is unity. The matrix A1 is then computed from the relation

    A1 = A − x1 p1′.    (35)

If λ2 and x2 are latent roots and vectors of A, x2 being normalized so that its first component is unity, then x1 − x2 is a latent vector of A1, with latent root λ2. For

    A1(x1 − x2) = A(x1 − x2) − x1 p1′(x1 − x2) = λ1 x1 − λ2 x2 − x1(λ1 − λ2) = λ2(x1 − x2).    (36)

Hence the latent roots of A1 are the same as those of A, except that the root λ1 has become zero, since A1 x1 = A x1 − x1 p1′ x1 = λ1 x1 − x1 λ1 = 0. The latent vectors are simply related to those of A. It is easily seen that the first row of A1 is zero throughout and that, since each of the latent vectors x1 − xi of A1 has a zero component in its first position, we need work only with the matrix of order n − 1 in the bottom right-hand corner of A1. Hence the order of the relevant matrix is reduced by unity with each root we find. For the computation of the required vector, we first find a latent vector y2 of the (n − 1) × (n − 1) matrix obtained from A1, and extend it to order n by giving it a zero first component. Then the latent vector x2 of A, corresponding to y2, is given by

    x2 = x1 + k y2,    (37)

where we use the extended y2. The factor k is necessary because of a normalizing factor in y2. Multiplying (37) by p1′, we have

    λ2 = λ1 + k p1′ y2,    (38)

and from this we obtain k. Equation (37) then provides the required

vector x2. In this description it was assumed that x1 had been normalized so that its first element is unity. From the point of view of numerical convenience it is better to normalize so that the largest element is unity. The analysis is unaltered, but since we then use the row of A in the position corresponding to this largest element instead of the first row, the notation is not so convenient.

11. A simple example will illustrate the method of root-removal. A root and vector of the matrix

    [  2   3   2 ]
    [ 10   3   4 ]
    [  3   6   1 ]

are respectively λ = −2 and x = [1, 2, −5]′. Normalizing x so that its largest component is 1, we obtain

    x = [−0.2, −0.4, 1.0]′.

The last row of A is therefore used in the root-removal process and we find

    A1 = [  2.6   4.2   2.2 ]
         [ 11.2   5.4   4.4 ]
         [  0     0     0   ]

It is clear that the latent vectors of A1 all have zero for their last component and therefore we need only work with vectors of order 2 and the matrix of order 2.

A latent vector of the matrix

    [  2.6   4.2 ]
    [ 11.2   5.4 ]

is the vector [3, −4]′, with the corresponding latent root equal to −3. We can then remove this root from the matrix of order 2. To do this we write the vector in the form [−0.75, 1.00]′, and use the last row in the root-removal process. This gives the matrix

    [ 11   8.25 ]
    [  0   0    ]

It is clear that the two latent vectors of this matrix have zero in the last position and that we need only work with the single element 11. The last latent root is therefore 11. We have thus obtained three latent vectors,

    [ −0.2 ]       [ −0.75 ]
    [ −0.4 ] ,     [  1.00 ] ,     [ 1.0 ] ,
    [  1.0 ]

of which only the first is a latent vector of the original matrix A.

12. For a matrix of order n we would have found n vectors of the following form: one vector u1 of order n (a true latent vector of the matrix A), one vector u2 of order n − 1, one vector u3 of order n − 2, ..., one vector un of order 1. To obtain from the vector of order n − r a true latent vector of the original matrix A we would proceed in the following r steps. A vector of order n − r + 1 would be computed, using the vector ur of order n − r + 1, from relations of types (37) and (38). From this vector one of order n − r + 2 would be computed, using these relations again with the latent vector ur−1 of order n − r + 2, and so on, until we obtained a true latent vector of order n.

13. In the above example the vector [−0.75, 1.00, 0.00]′ leads to a true latent vector x2 of A from the relation

    x2 = [−0.2, −0.4, 1.0]′ + k[−0.75, 1.00, 0.00]′.

Multiplying this by [3, 6, 1], the last row, we find, from (38), the result that −2 + 3.75k = −3, so that k = −4/15, and then

    x2 = [0, −2/3, 1.0]′.

The third vector may be found in two similar steps, given below:

(i)  [−0.75, 1.00]′ + k[1.0, 0.0]′.  Multiplication by [11.2, 5.4] gives −3 + 11.2k = 11, so that k = 1.25, and the vector of order 2 is [0.5, 1.0]′.

(ii) x3 = [−0.2, −0.4, 1.0]′ + k[0.5, 1.0, 0.0]′.  Multiplication by [3, 6, 1] gives −2 + 7.5k = 11, so that k = 26/15, and

    x3 = [2/3, 4/3, 1.0]′.
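The root-removal process of §§ 10-13 can be sketched in Python as follows; this is an added illustration, with assumed helper names. The row used is the one in the position of the largest element of the normalized vector, and the lifting step applies relations (37) and (38).

    # Removal of a known root and vector (deflation), and recovery of a vector of A
    # from a vector of the deflated matrix, in the manner of the preceding sections.
    def remove_root(A, x):
        n = len(A)
        r = max(range(n), key=lambda i: abs(x[i]))          # position of the largest element
        xn = [xi / x[r] for xi in x]                        # largest element made unity
        row = A[r][:]
        A1 = [[A[i][j] - xn[i] * row[j] for j in range(n)] for i in range(n)]
        return A1, row, xn

    def lift_vector(y, xn, row, lam):
        # y is a vector of the deflated matrix, zero in the position of the unit element of xn;
        # relations (37) and (38): x = xn + k*y with lam = (row . xn) + k*(row . y).
        p_y = sum(r * yi for r, yi in zip(row, y))
        p_x = sum(r * xi for r, xi in zip(row, xn))         # equals the removed root
        k = (lam - p_x) / p_y
        return [xi + k * yi for xi, yi in zip(xn, y)]

    if __name__ == "__main__":
        A  = [[2.0, 3.0, 2.0], [10.0, 3.0, 4.0], [3.0, 6.0, 1.0]]
        x1 = [-0.2, -0.4, 1.0]                              # vector for the root -2
        A1, row, xn = remove_root(A, x1)
        print(A1)                                           # last row is now zero
        y2 = [-0.75, 1.0, 0.0]                              # vector of A1 for the root -3
        print(lift_vector(y2, xn, row, -3.0))               # [0.0, -2/3, 1.0], a vector of A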

14. We have described these iterative methods in some detail because they are simple and, besides being of value in their own right, they are frequently used in conjunction with other methods. For symmetric matrices there are a number of more powerful methods which would, in most circumstances, be used instead. We now describe some of these.

THE METHOD OF JACOBI

15. This method depends upon the fact that if T is orthogonal, that is, if TT′ = I, then the roots of TAT′ are the same as those of A. For

    |TAT′ − λI| = |T(A − λI)T′| = |T| |A − λI| |T′| = |A − λI|,    (39)

so that the zeros of |TAT′ − λI|, as a function of λ, are the same as those of |A − λI|.

A simple orthogonal matrix T is given by

    Tpp = cos θ,   Tqq = cos θ,   Tpq = sin θ,   Tqp = −sin θ,   Tii = 1 (i ≠ p, q),   Tij = 0 for all other i, j.    (40)

If we form TAT′, only the p and q rows and the p and q columns of A are altered, and the matrix remains symmetric. The (p, q) element of TAT′ is given by apq cos 2θ − ½(app − aqq) sin 2θ, from which it follows that if

    tan 2θ = 2apq/(app − aqq),    (41)

then the (p, q) element of TAT′ is zero. It is easy to prove that both the sum of the diagonal elements (the trace) and the sum of the squares of the off-diagonal elements other than the (p, q) and (q, p) elements are unchanged. If we choose a succession of T matrices and successively premultiply by T and postmultiply by T′, each T matrix being chosen to make zero the largest off-diagonal element in the resulting matrix at that stage, then we ultimately obtain a diagonal matrix. We have

    Tr Tr−1 ... T1 A T1′ T2′ ... Tr′ = D,    (42)

where D is a diagonal matrix. Since the product of orthogonal matrices is itself orthogonal, the elements of D are the latent roots of A. If the latent vectors of A are wanted they are given by the columns of the product matrix

    T1′ T2′ ... Tr′ = S.    (43)

This follows from (42), since

    S′AS = D,    (44)

giving

    AS = (S′)−1 D = SD.    (45)
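The following Python sketch carries out Jacobi's process on a small symmetric matrix; it is an added illustration (names and the stopping rule are assumptions), each rotation being chosen by (41) to annihilate the currently largest off-diagonal element.

    # Jacobi's method: successive plane rotations T A T' drive A towards diagonal form;
    # the diagonal then holds the latent roots.
    import math

    def jacobi_roots(A, max_rotations=100, tol=1e-12):
        A = [row[:] for row in A]
        n = len(A)
        for _ in range(max_rotations):
            p, q = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                       key=lambda ij: abs(A[ij[0]][ij[1]]))
            if abs(A[p][q]) < tol:
                break
            theta = 0.5 * math.atan2(2.0 * A[p][q], A[p][p] - A[q][q])   # equation (41)
            c, s = math.cos(theta), math.sin(theta)
            for k in range(n):                    # post-multiplication by T': columns p and q
                akp, akq = A[k][p], A[k][q]
                A[k][p] = c * akp + s * akq
                A[k][q] = -s * akp + c * akq
            for k in range(n):                    # pre-multiplication by T: rows p and q
                apk, aqk = A[p][k], A[q][k]
                A[p][k] = c * apk + s * aqk
                A[q][k] = -s * apk + c * aqk
        return [A[i][i] for i in range(n)]

    if __name__ == "__main__":
        A = [[4.0, 2.0, 1.0], [2.0, 5.0, 2.0], [1.0, 2.0, 3.0]]
        print(sorted(jacobi_roots(A)))            # the latent roots of A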

GIVENS' METHOD

16. Each step of the reduction in Jacobi's method, consisting of premultiplication by the matrix T, and postmultiplication by its transpose T′, where the non-zero elements of T are given by (40), is called a rotation. The pair of values of p, q is called the plane of the rotation, and θ the angle of the rotation. The method of Givens is similar to that of Jacobi, inasmuch as each step of it consists of a rotation. In this case, however, we choose θ so that the element in the (p − 1, q) position (p > 1) becomes zero. This gives

    tan θ = ap−1,q/ap−1,p.    (46)

The rotations are applied systematically to the matrix to make zero, in order, the following elements:

                   Positions of zero elements            Planes of rotation
    1st row        (1,3), (1,4), (1,5), ..., (1,n)       (2,3), (2,4), (2,5), ..., (2,n)
    2nd row        (2,4), (2,5), ..., (2,n)              (3,4), (3,5), ..., (3,n)
    ..........................................................................
    (n−2)th row    (n−2, n)                              (n−1, n)

In contrast to the Jacobi rotations, each zero element, once produced, persists throughout the subsequent transformations. The symmetry of the matrix is preserved, so that after carrying out the above ½(n − 1)(n − 2) rotations, all elements are zero, other than those in the principal diagonal and the immediately adjacent diagonals, one on either side. The matrix is then said to be of triple-diagonal form, or sometimes tri-diagonal or co-diagonal form. As the work progresses the amount of computation in a rotation becomes steadily less. At the stage when zeros are introduced in the rth row, we are effectively working with a matrix of order n − r + 1 since the first r − 1 rows and columns remain unaltered. Because of this and the non-iterative nature of the computation, the reduction to triple-diagonal form takes about one-twentieth of the time for the reduction to diagonal form by Jacobi's method [216].

17. The latent roots of the symmetric triple-diagonal form are the same as those of the original matrix, and we now consider their evaluation. Since other methods lead to triple-diagonal matrices and also many latent root problems give rise to matrices which are already in this form, the solution of this problem is important quite apart from its present context.

DETERMINATION OF THE LATENT ROOTS AND VECTORS OF A SYMMETRIC TRIPLE-DIAGONAL MATRIX

18. The method we describe, sometimes called the method of bisections, is often much slower than alternative methods in existence, but it is comparatively simple, and has such remarkable numerical stability that it is frequently used. It depends on the following result [30, 31]. Let pr(λ) be the value of the rth leading principal minor of C − λI, where C is a symmetric triple-diagonal matrix, and p0(λ) = 1. Then the number, s(λ), of agreements in sign between consecutive members of the sequence

    p0(λ), p1(λ), p2(λ), ..., pn(λ)

is equal to the number of latent roots of C which are greater than λ. (Note that s(λ) is an integer between 0 and n inclusive.) Let the non-zero elements of C be denoted by

    dr−1,r = dr,r−1 = βr,   dr,r = αr.    (47)

… is halved by each iteration after the first, so that five-decimal accuracy will be achieved after seventeen cycles.
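A small Python sketch of the sign count and of its use with bisection follows; it is an added illustration (names assumed), with the diagonal terms written αr and the off-diagonal terms βr as in (47).

    # Sturm-sequence count for a symmetric triple-diagonal matrix: s(lam) is the number
    # of agreements in sign between consecutive leading principal minors of C - lam*I,
    # and equals the number of latent roots greater than lam.  Bisection on s isolates a root.
    def sign_agreements(alpha, beta, lam):
        n = len(alpha)
        count = 0
        p_prev, p = 1.0, alpha[0] - lam                   # p0 and p1
        if p > 0.0:
            count += 1
        for r in range(1, n):
            p_prev, p = p, (alpha[r] - lam) * p - beta[r] ** 2 * p_prev
            if p * p_prev > 0.0:
                count += 1
        return count

    def kth_largest_root(alpha, beta, k, lo, hi, steps=50):
        for _ in range(steps):                            # the interval is halved at each step
            mid = 0.5 * (lo + hi)
            if sign_agreements(alpha, beta, mid) >= k:    # at least k roots exceed mid
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    if __name__ == "__main__":
        alpha = [2.0, 2.0, 2.0]                           # diagonal of a 3 x 3 triple-diagonal matrix
        beta  = [0.0, 1.0, 1.0]                           # off-diagonal terms (beta[0] unused)
        print(kth_largest_root(alpha, beta, 1, -10.0, 10.0))   # largest root, 2 + sqrt(2)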

GAUSS-SEIDEL OR LIEBMANN ITERATION

6. For the equations (3), the elements of x are obtained in succession from the equations
