The development of a performance rating scale for the evaluation of shipboard performance of enlisted naval personnel

383 67 5MB

English Pages 159

Report DMCA / Copyright

DOWNLOAD FILE

Polecaj historie

The development of a performance rating scale for the evaluation of shipboard performance of enlisted naval personnel

Citation preview

THE DEVELOPMENT OF A PERFORMANCE RATING SCALE FOR THE EVALUATION OF SHIPBOARD PERFORMANCE OF ENLISTED NAVAL PERSONNEL

A Dissertation Presented to the Faculty of the Department of Psychology The University of Southern California

In Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy

by Robert Ramsay Mackie August 1950

UMI Number: DP30406

All rights reserved INFORMATION TO ALL USERS The quality of this reproduction is dependent upon the quality of the copy submitted. In the unlikely event that the author did not send a complete manuscript and there are missing pages, these will be noted. Also, if material had to be removed, a note will indicate the deletion.

Oissertatior Publ sh*ng

UMI DP30406 Published by ProQuest LLC (2014). Copyright in the Dissertation held by the Author. Microform Edition © ProQuest LLC. All rights reserved. This work is protected against unauthorized copying under Title 17, United States Code

ProQuest LLC. 789 East Eisenhower Parkway P.O. Box 1346 Ann Arbor, Ml 4 8 10 6 - 1346

p u . o.

Ps

/'/' /o '-

F

T h is d is s e rta tio n , w r itte n by

...............E.Qt>.e.r.t.„Raxns.ay..jyia-C-ki.e................... u n d e r the g u id a n c e o f h .ls ... F a c u lty C o m m itte e on S tu d ie s, a n d a p p ro v e d by a l l its m em b ers, has been p re s e n te d to a n d acce p te d by th e C o u n c il on G ra d u a te S tu d y a n d R e search, in p a r t ia l f u l ­ f illm e n t o f re q u ire m e n ts f o r th e degree o f DOCTOR

OF

P H IL O S O P H Y

Committee on Studies

Chawman

ACKNOWLEDGMENT This study was made possible through a grant from the Office of Naval Research.

It was a part of a

more general project directed by Dr. Clark L. Wil­ son, and designed to develop shipboard performance criteria.

The writer is indebted to Dr. Wilson for

guidance throughout the study.

TABLE OF CONTENTS CHAPTER I.

PAGE

THE PROBLEM AND DEFINITION OF T E R M S ..........

1

Introduction . . .. ........................

1

The p r o b l e m ................................

2

Statement of

the problem .................

2

........

3

Importance of the study Definition of terms u s e d ..........

5

Pay grade or r a t e .......................

5

Reliability

..............................

5

Objectivity

..............................

5

STEN-score ................................

6

Organization of remainder of the disserta­

II.

tion ......................................

6

REVIEW OF THE L I T E R A T U R E .....................

8

Graphic devices

.........................

10

Ranking and comparison devices ..........

13

Check-list devices .......................

18

The critical-incident technique

........

19

.................

21

...............

23

Forced-choice technique Comparisons of techniques III.

DESCRIPTION OF THE RATING SCALE AND THE NATURE OF THE S A M P L E S ..............................

31

The f o r m a t ................................

31

Selection of the t r a i t s .................

31

iii CHAPTER

PAGE Wording of the traits

..........

33

The s a m p l e s ..............................

34

The raters

..............................

34

Instructions to the r a t e r s ...............

35

Scoring of the s c a l e s ...................

36

IV.

MEANS AND DISPERSIONS OF R A T I N G S .............

37

V.

INTERCORRELATIONS OF TRAITS AND RESULTS OF THE FACTOR A N A L Y S E S ..............

4l

Intercorrelation of traits, Procedure I

.

42

Intercorrelation of traits, Procedure II (variance within pay grade)

..........

48

Intercorrelation of traits, Procedure III (pay grade partialled o u t ) .............

VI. VII.

54

Results of the factor analyses ..........

60

Identification of factors

...............

69

. . '.................

77

AGREEMENT OF THE RATERS

THE RELIABILITY OF THE RATING SCALE AND SOME INDICATIONS OF V A L I D I T Y ...................

83

Reliability' of the scale

83

........

Relationship to other measures of perform­ ance ....................................... VIII.

84

SUMMARY AND C O N C L U S I O N S .....................

88

S u m m a r y ....................................

88

C o n c l u s i o n s ................................

95

iv CHAPTER

PAGE

SELECTED B I B L I O G R A P H Y ................................ APPENDIX A.

SAMPLE OF RATING SCALES RF 101 AND

RF 1 0 5 ......................................... APPENDIX B.

100

105

SAMPLE PROCEDURE FOR REDUCING THE

EFFECTS

OF PAY GRADE V A R I A N C E ...............

145

LIST OF TABLES TABLE I.

PAGE Mean and Middle 80 Per Cent Range of Rat­ ings, Mean and Middle 80 Per Cent Range of Standard Deviations for Each Trait . . .

II.

38

Intercorrelations of Rating Scale Traits, Ability Check List, Navy G.C.T. and Bio­ graphical Information Including Pay Grade (First Sample)

III.

. .

.....................

43

Intercorrelations of Rating Scale Traits, Navy G.C.T. and Biographical Information Including Pay Grade (SecondSample)

IV.

. . . .

45

Intercorrelations of Rating Scale Traits, Ability Check List, Navy G.C.T. and Bio­ graphical Information Based on Variance Within Pay Grade (FirstSample)

V.

............

49

Intercorrelations of Rating Scale Traits, Navy G.C.T. and Biographical Information -Based on Variance Within Pay Grade (Second S a m p l e ) ....................................

VI.

51

Intercorrelations of Rating Scale Traits, Navy G.C.T. and Biographical Information, Pay Grade Partialled Out Statistically (First Sample)

...........................

55

vi TABLE VII.

PAGE Intercorrelations of Rating Scale Traits, Navy G.C.T. and Biographical Information, Pay Grade Partialled out Statistically (Second Sample)

VIII.

...........................

57

Factor Loadings After Rotation and Communalities Before and After Rotation (First Sample, N = 187> Pay Grade Acting as a V a r i a b l e ) ..................................

IX.

63

Factor Loadings After Rotation and Communalities Before and After Rotation (First Sample, N = l87> Based on Variance Within Pay Grade)..................................

X.

XI.

64

Factor Loadings After Rotation and Communalities Before

andAfter Rotation

(First

Sample, N =

l87> PayGradePartialled Out).

65

Factor Loadings After Rotation and Communalities Before and After Rotation (Second Sample, N = 286, Pay Grade Acting as a V a r i a b l e ) ..................................

XII.

66

Factor Loadings After Rotation and Communalities Before and After Rotation (Second Sample, N = 286, Based on Variance Within Pay Grade)..... .............................

67

Vll PAGE

TABLE XIII.

Factor Loadings After Rotation and Communalities Before and After Rotation (Second Sample* N = 286, Pay Grade Partialled Out).

XIV.

68

Inter-Rater Agreement on the Various Rating Scale Traits and on Average Score, for

XV.

Both Samples, with Pay Grade

in as a Vari­

able and with Pay Grade Held

Constant . . .

80

Relationship Between Performance Check List (Form RF 107) Total Scores and Scores on Individual Traits of the Rating Scale (Form RF 101), General Classification Test Scores, and Selected Biographical Information, with the Influence of Pay Grade Held C o n s t a n t .......................

86

CHAPTER I

THE PROBLEM AND DEFINITION OF TERMS I.

INTRODUCTION

Development of adequate criteria of performance has been one of the most neglected areas of endeavor in applied psychological research.

In recent years, therefore, empha­

sis has shifted somewhat to the development of performance criteria, with the hope that already elaborate testing pro­ grams will become more and more predictive with increased refinement of criteria. The final criterion of the adequacy of any perform­ ance is necessarily based on human judgment.

Every crite­

rion depends, somewhere along the line, on evaluation by individuals in terms which are in line with the past expe­ rience and expectations of those individuals.

Even so-

called ’’objective'1 criteria, based on rate or quantity of performance, are superior to conventional judgments of ade­ quacy only in that they have reference to more refined measuring procedures than the inexact ordinal scales into which most types of human judgments fall. The problem of criteria development, then, becomes the problem of gathering human judgments in such a way that they will be consistent and have reference to a common

2 meaningful scale of units of some type.

One of the tradi­

tional frontal attacks on this problem has been the devel­ opment of rating devices.

The best that can be said of

past rating scales is that they have met with moderate suc­ cess.

And yet continued use and development of such scales

in practical situations indicates that, despite admitted limitations, the rating scale is destined to be one major line of approach in the development of performance criteria. It is hypothesized that the failure of past rating devices was due, in part at least, to the inability of the device to organize the raters1 judgments in terms of reliable ref­ erence points which have the same meaning for all raters at the same time, and for any given rater from one time to the next.

II.

THE PROBLEM

Statement of the problem.

It was the purpose of this

study to develop, and investigate the usefulness of, a rat­ ing scale which could be used to evaluate the performance of enlisted naval personnel aboard ship.

The project was

one part of a four-fold program designed to develop perform­ ance criteria.

The other aspects involved aptitude testing,,

development of performance tests, and development of per­ formance check-lists.

The research was made possible through

a grant from the Office of Naval Research. Accomplishing the outlined purpose of this study would involve: 1.

*

Investigation of the characteristics and useful­

ness of existing rating scales. 2.

Development of the most promising format in view

of Step 1. 3.

Selection and definition of traits which seemed

to be most appropriate in evaluating performance of naval personnel. 4. .Administration of the scale to an appropriate sample of men, using an appropriate sample of raters. 5«

Development of scoring procedures and determina­

tion of averages and dispersions of ratings. 6.

Determination of intercorrelations of trait

ratings and the nature of the factors accounting for them. 7«

Determination of the extent to which one rater

agreed with another, and the extent to which each rater agreed with himself on subsequent administrations of the scale. 8.

Determination of the extent to which the ratings

were predictive of outside criteria of performance. Importance of the study.

No program of research on

selection and evaluation is better than its criterion of

4 performance.

Increased extent and refinement of psycho­

logical tests have not been matched in the development of criteria.

In the Navy, criteria of school performance

were fairly well developed, but criteria of shipboard per­ formance were almost entirely lacking.

Quarterly evalua­

tion marks were relatively meaningless and useless in a practical sense because of their small dispersions and marked modality at the very top of the scale.

Marks were

assigned with little common meaning and without the aid of objective points of reference.

The basis for promotion

rested largely with unsystematized impressions of officers and petty-officers.

If these impressions were brought to­

gether and formulated according to common points of ref­ erence, a step toward the development of a valid and re­ liable criterion of performance would be made.

It was

hoped that the development of an adequate rating device would accomplish this end in part, thereby systematizing the ultimate basis for all criteria— judgment of superiors. This phase of criteria development was complemented by two others--that of performance check lists and per­ formance (achievement) tests.

It was hoped that through

the operation of these three phases of a criterion program, the Navy would have available the basis for the validation of elaborate psychological testing programs, for evaluat­ ing the programs at the various training schools, and for

5 setting up a sound promotional system which would be of benefit to both the efficiency and the morale of the service.

III.

DEFINITION OF TERMS USED

Pay grade or rate.

Pay grade or more commonly, rate,

refers to the level of advancement of a man in the navy. The following rates proceed from highest to lowest pay grade: Chief Petty Officer, 1st Class Petty Officer, 2nd Class Petty Officer, 3rd Class Petty Officer, and Striker.

A striker is

a non-rated man and falls in the lowest pay grade--he is an apprentice.

Throughout the text the term pay grade will be

substituted for the term rate in order to avoid confusing rate with the actual ratings assigned. Reliability.

Reliability has been defined many ways

in rating scale studies.

In this text reliability will

refer to consistency from one rating to the next as indicat­ ed by the correlation of each rater's judgments at a given time with the same rater's judgments at some later time.

It

is reliability in the re-test sense. Objectivity.

The objectivity of the scale will be

defined as the extent to which the raters agree among them­ selves on a given man's abilities.

It may be thought of as

6 inter-rater agreement while reliability has been defined as intra-rater agreement. ' STEN-score.

In order to equate means and varia­

bilities of the various raters, it was necessary to reduce all raw ratings to standard score form.

For convenience

these were further reduced to a ten-point scale (the STEN scale) with values ranging from 0 to 9 inclusive and with a mean of 4.5-

In all cases, high STEN scores indicate

the possession of desirable traits.

IV.

ORGANIZATION OF REMAINDER OF THE DISSERTATION

In Chapter II a review is made of the pertinent literature.

Due to the vast amount of literature on the

subject of merit rating, only articles appearing within the last twenty-five years were reviewed during this study. Attention was concentrated on those studies which reported the reliability, objectivity, and/or validity of the scales in quantitative meaningful terms.

Reporting in

Chapter II was confined to recent articles which seemed to represent significant progress in rating-scale develop­ ment . Chapter III contains a description of the format of the scale, the traits selected in making up the scale, the procedure in making up descriptive statements, the

7 instructions to raters, and a statement of the changes made in the second form of the scale.

The populations of

raters and ratees are described, and the procedures for scoring the scale are discussed. Chapter IV deals with the means and variabilities derived from the first sample of ratings. In Chapter V the methods of inter-correlating the traits are described.

Three matrices of intercorrelations

are presented for each of two samples of data. tion of the various matrices is explained.

The deriva­

Finally, the

results of six factor analyses— three on each sample of data--are presented and interpreted. Chapter VI is concerned with the objectivity of the rating scale.

The procedures for determining inter-rater

agreement are described, and results are given and discussed. In Chapter VII the method for determining the re­ liability of the scale is described, and results are report­ ed.

The validity of the scale is considered in view of its

correlation with outside criteria of performance. Chapter VIII contains a summary of the study, the conclusions to be derived, and suggestions for further re­ search in the establishment of performance criteria through merit rating.

CHAPTER II

REVIEW OF THE LITERATURE In order that the review of an extensive body of literature remain a reasonable task, it was necessary to be somewhat selective in choosing a representative sample of studies to report.

The studies reported in this chapter

were chosen because they met at least two criteria:

(l)

they were relatively recent, and (2) they represented what appeared to be either an unusually exhaustive study in terras of the statistics computed, or else appeared to be representative of particular progress in rating-scale con­ struction, or both.

It is believed that the above restric­

tions were justifiable on the assumption that the contribu­ tions of the older studies will have been retained in later work, and that the merit of any program not reported in quantitative terms was not determinable on the basis of the verbal descriptions alone.

Perusal of the literature in­

dicated that even the merit of some scales reported in quantitative terms was hard to determine because of the many and varied techniques of arriving at estimations of reliability and/or validity. The most comprehensive bibliography of rating-scale

9 literature known to the writer was compiled by Mahler.1 In it he points out that the growth of interest in meritrating during recent years stems from a gradual realiza­ tion on the part of administrative personnel that the establishment; of an efficient working force requires tha^L each individual employee receive recognition and reward in proportion to his relative performance.

In addition

to the increased emphasis on rating methods due to.the need for evaluating performance of present personnel, it appears likely that the ever-increasing need for criteria of performance required for selection and placement pro­ grams will stimulate further interest in merit-rating tech­ niques.

In recent years this intensified interest has re­

sulted in a good deal of search for the ideal method.

As

Mahler points out, it is unlikely that a universal method will be found which will be satisfactory in all situations and under many conditions.

Instead, perhaps the best that

can be done is to incorporate into the scales those features of other approaches which have demonstrated usefulness, and then tailor the scale to suit the particular needs of each

1 Walter R. Mahler, Twenty Years of Merit Rating (1926-19^6), (New York: The Psychological Corporation, 1947), 73 PP.

10 new situation.

It was the purpose of this review to de­

termine what, if any, such features were most promising. Graphic devices.

(Linear, alphabetic, numerical,

graphic, and defined distribution scales).

Undoubtedly the

oldest approach to performance evaluation is some type of graphic rating scale, utilizing or implying a continuum line graduated either by adjectives or more complete de­ scriptive statements indicating the degree to which the trait in question is possessed.

This general type of scale

was used over 160 years ago by the Dublin Evening Post to p rate Irish Legislators, and although it may seem like lack of progress, is essentially the type to be reported in this study.

Certainly it may be said that no other general ap­

proach to the problem has as yet demonstrated clear superi­ ority over the well-constructed graphic device. Two studies in which graphic-type scales were used have been selected for reporting due to their completeness from the standpoint of statistical analysis, and because of their similarity to the present study.

They are also

among the best studies which have been made, in terms of the excellence of results obtained.

D. Journal, 7:130-131. 1928.

Hackett, "Rating Legislators," Personnel

11 Bolanovich^ analyzed extensively the ratings gather­ ed on 143 field engineers at the Radio Corporation of Amer­ ica.

The following fourteen, traits comprised the scale:

Personality, Personal Appearance, Punctuality, Thoroughness, Efficiency, Resourcefulness, Dependability, Cooperation, Job Attitude, Technical Ability, Sales Ability, Organizing Ability, Judgment, and Desire for Self-Improvement.

Follow­

ing the definition of each trait was a five-point scale indicating degree of possession.

Systematicerrors

were

reduced by randomizing the serial

order of thenumerical

values ((e.g. (3) (2) (A) (1) (5)) which the rater had to check.

An attempt was made to normalize these ratings. A factor analysis of the trait intercorrelations

yielded six factors: 1.

Attendance to details

2.

Ability to do the Job

3.

Sales ability

4.

Conscientiousness

5.

Organizing or systematic tendency

6.

Social intelligence

The reliability of the items was estimated from the

^ D. J. Bolanovich, "Statistical Analysis of An Industrial Relations Chart," Journal of Applied Psychology, 30:22-31, 1946.

12 calculated communalities, and thought to be at least .80 . A maximum multiple correlation of .81 was obtained between a criterion of recommendation for promotion and the traits of Personality, Efficiency, Resourcefulness, Cooperation, Job Attitude, Sales ability and Organizing Ability.

In all

probability, such excellent results reflect well-trained, sophisticated raters.

L

A study reported by Ewart, Seashore, and Tiffin2*- is,

unfortunately, more typical of the results usually obtained with conventional-type rating scales.

A graphic scale,

containing the following eleven traits, was used in evaluat­ ing 1120 men in an industrial plant:

Safety, Knowledge of

the Job, Versatility, Accuracy, Productivity, Overall Job Performance, Industriousness, Initiative, Judgment, Coopera­ tion, Personality, and Health.

The intercorrelations of

these traits were .70 to .80 on the average.

The reliability

of the scale was not reported. A factor analysis of the intercorrelations was per­ formed using the centroid method.

Factor I accounted for

most of the variance and was labeled "Ability to do present job."

It was most heavily defined by Overall Performance,

Ewart, S. E. Seashore, and J. Tiff in, ?,A Factor Analysis of An Industrial Merit Rating Scale," Journal of Applied Psychology, 25:481-486, 1941.

13 Productivity, and Industriousness. Factor II was rather poorly defined but described as ‘'Ability or knowledge possessed over and above the re­ quirements of the specific job."

The suggestion was made

that training of raters might enable them to rate workers on more than the two factors identified. The only other rating scale study that has been re­ ported and carried as far as factorial analysis was that by Chi.5

Using a graphic scale of twenty personality

traits, five teachers rated 1 0 0 pupils.

The re-rate re­

liability from one semester to the next was . 8 1 .

The inter­

rater agreement was indicated by a correlation of .47. Two general factors were identified in the analysis as well w.

as the specific factor associated with each variable. These general factors were identified interestingly enough as "Halo" and "Volition to Achieve." Ranking and comparison devices.

Probably the best

known program in which ranking and comparison techniques were used was the man-to-man scale developed to evaluate Army Officers during the first World War.

Since then this

technique seems to have been in little use as a method of

X ^ p. chi, "Statistical Analysis of Personality Ratings," Journal of Experimental Education, 5:229-245> 1937.

14 evaluating performance, but some of the most recentlydeveloped devices have revived this basic approach.

It

seems to be the most natural of all the judgmental proce­ dures, and as such might well be incorporated into rating devices of all kinds. W. S. Davis^ reports success with an adaptation of the original man-to-man scale.

All employees in a given

department are rated on one "factor" at a time.

This tech­

nique reportedly equalized ratings between departments, was more objective than traditional methods, and was less sus­ ceptible to "halo" effect. Exemplary of a recent trend toward the paired-comparison technique is a study by Lawshe, et al.7

Twenty-

four offset pressmen were rated on overall performance.

A

slip of paper was prepared for all possible pairs of names (276 slips in all), and these were stapled together into a rating booklet.

Preparation time was not prohibitive, and

a rater typically took about 30 minutes to make his 276 choices.

The reliability of the ratings was very high,

averaging .97 for three supervisors.

Inter-rater agreement

y 6 william S. Davis, "Factor Merit Hating System," Personnel, 22:309-319* 19^6.

7 C. H. Lawshe, N. C. Kephart, and E. J. McCormick, "The Paired Comparison Technique for Rating Performance of Industrial Employees," Journal of Applied Psychology,

X

33 :69- 77 * 19^ 9 .

15 (on the rank order of the men) also was very high for rating devices, averaging .83*

So far as the writer knows,

these are among the highest values for reliability and ob­ jectivity that have been obtained with rating scales.

The

device remains limited, however, to reasonably small groups, and to a very small number of traits.

Lawshe, et al., feel

justified in using a single trait of overall performance due to the high degree of communality among traits typical­ ly measured by rating scales.

In evaluating their unusual­

ly fine results, one must bear in mind the small samples of both raters and ratees which were involved. Other recent studies using either ranking or com­ parison techniques are those of McMurry and Johnson,® Stronck,^ and Ferguson,10 and the techniques have been ef­ fectively combined In a procedure described by Bittner and Rundquist.11

In this approach, the group to be rated is

first broken up into a series of randomized sub-groups

® R. N. McMurry, and D. L. Johnson, "Development of Instruments for Selecting and Placing Factory Employees," Advanced Management, 1 0 : 1 1 3 - 1 2 0 , 19^5« 9 H. N. Stronck, "internal Bank Management Controls; Control of Personnel and Salaries," Bankers Monthly, 56: 719-721, 1939. ^ 10 L. W. Ferguson, "A Brief Description of a Reliable Criterion of Job Performance," Journal of Psychology, 25: 389-399, 19^8 . ^ ^ R. H. Bittner and E. A. Rundquist, "The Rank-Comparison Rating Method," Journal of Applied Psychology, 34: 171-177, 1950.

16 which do not exceed 20 in size.

The members of each sub­

group are then ranked from first to last in the trait in question.

This is accomplished by selecting the best and

poorest men first* then picking the best and poorest men from those who remain* and so forth until all men have been ranked by building from both ends toward the middle. When all sub-groups have been ranked in this fashion* they are merged by a series of paired comparisons.

For example*

the top man in Sub-Group One is compared with the top man in Sub-Group Two.

If the man from Sub-Group One is judged

the better of the two, he becomes the best man in SubGroup One plus Two.

The second man in Sub Group One is now

paired with the first man in Sub-Group Two* the choice be­ tween the two is made, and this process continues until the two sub-groups have been completely merged and a new com­ plete ranking has been accomplished.

In a similar fashion

the remaining sub-groups are merged until one final com­ plete group* ranked from top to bottom* is achieved. The authors claim the following advantages for the rank-comparison method:

(l) it is easily understood by the

raters; (2) raters like the method and have confidence in it; (3) it can be applied to large groups; and (4) it re­ quires very little time.

In addition they reported a re­

liability of .91 for one foreman who ranked 75 factory women twice during a three-month interval* and a reliability

17 of .89 for another foreman who made two ratings one month apart on 31 factory women.

The inter-rater agreement using

this approach has ranged from .45 to .73 for several pairs of raters. Another recent development using these techniques is that of buddy ratings or evaluation by peers as reported 12 by Wherry and Fryer. In this procedure, the peers of the men to be rated are asked to nominate the four or five best men in the group and also the four or five poorest men with respect to the trait under consideration.

Scores

are obtained by subtracting the number of "poor" from the number of "good” mentions and dividing by the total possible number of mentions.

In the Wherry and Fryer study, two

groups of officer candidates were evaluated on the trait of leadership by means of buddy ratings.

The students of each

class were asked to select the five men in their section who demonstrated personality traits least desirable in an Army Officer.

The reliability of these nominations was re­

ported as .75 for re-ratings after one month and .58 after four months.

The correlation between buddy ratings and re­

tention (of at least two months) at the school was .70 , and

V V y * 1 2 R. H. Wherry and D. H. Fryer, "Buddy Ratings: Popularity Contest or Leadership Criteria," Personnel Psychology, 2:147-159, 1949-

18 between buddy ratings and graduation from school was .49. The ratings had a low correlation with academic grades which predicted retention in school to about the same extent (.50).

Prom their analysis, Wherry and Fryer were able to

conclude: 1.

Buddy ratings were the purest measure of leader­

ship (from a factor study). 2.

Co-workers were able, after one month, to eval­

uate leadership to a degree equalled by instructors only after four months of observation, 3*

Buddy ratings were more reliable than were

graphic ratings of leadership. Check-list devices.

In an effort to get closer to

objective evaluations of the actual performance of ratees, a relatively recent development in criteria development, has been the performance check-list.

This type of device con­

tains specific items referring to knowledge of, or tech­ nique on, the job which can be checked either on an all-ornone basis, or on some scale involving the frequency with which the particular behavior is exhibited. A good example of this type of development is that of Knauft.^3

^

He reported construction of weighted check-lists

13 E. B. Knauft, "Construction and Use of Weighted Check List Rating Scales for Two Industrial Situations," Journal of Applied Psychology, 32:63-70, 1948.

19 for laundry-press operators and for bake-shop managers. The procedure was to obtain from personnel in supervisory positions lists of statements which described performance of the job in question.

These statements were edited,

duplicates were eliminated, and then they were sorted by the same supervisory personnel into nine piles on a goodto-poor continuum according to the Thurstone equal-appear­ ing interval technique.

Median scale values were computed

for each statement as were the semi-interquartile ranges. All statements with Q, values of one or more were eliminated, and the final selection of items was made from those re­ maining. The reliability of the press-operator check-list was

.87 based on scores made on two forms by 118 men who were rated by 15 laundry managers.

The reliability of the

baker*s check-list averaged .80 for two administrations. Using the total scale (both forms) this reliability rose to

.93 for the pressmen and .88 for the bakers. of inter-rater agreement was reported.

Only one study

The ratings of two

bakery managers correlated .81 for 35 men rated in common. The critical-incident technique.

Related to the

general problem of developing rating scales and to check­ lists in particular, is an approach recently outlined by

20

Flanagan and known as the critical-incident technique. The essence of this procedure is to establish the "critical requirements" of a job or activity through direct observa­ tions by participants in or supervisors of the job or ac­ tivity.

A critical requirement is defined by Flanagan as

a requirement which is crucial in the sense that it has been responsible for outstandingly effective or definitely unsatisfactory performance of an important part of the job or activity in question.

Presumably, a critical require­

ment differs from the requirements which appear important but in practice have no important effect on the perform­ ance of the specified activity. In addition to such critical requirements in terms of behavior, it is considered necessary to determine crit­ ical requirements of the work in terms of aptitude, train­ ing, information, attitudes, habits, skills, and abilities. Flanagan reported no figures but claimed that the technique has been used successfully in establishing critical require­ ments for U. S. Air Force Officers, and for research workers in the laboratory.

J. c. Flanagan, "Critical Requirements: A New Approach to Employee Evaluation," Personnel Psychology,

2 :419- 425 > 1949 .

21

Forced-choice technique.

One of the most promising

of recently developed approaches is the forced-choice per­ formance report.

This technique was developed in the Army

toward the end of the last war in an effort to circumvent many of the problems encountered with conventional-type scales.

According to R i c h a r d s o n 1 ^ the superiority of this

approach lies in the fact that reporting of the on the job performance of a man is separated from the evaluation of the relative significance of that performance.

The report­

ing supervisor is not asked to consider whether his report is favorable or unfavorable to his subordinate--his task is simply to describe job behavior as accurately as possible. The forced-choice adherents believe, then, that the task of reporting job behavior may be adversely affected if it is complicated at the same time by the task of evaluating the significance of that performance, or of ranking the man being rated in relation to.his co-workers.

In so evaluat­

ing, fact and inference may be confused. The procedure in constructing a forced-choice peri c.

formance report has been outlined by Sissonxu as follows:

)( 15 M. w. Richardson, "Forced Choice Performance Reports," Personnel, 26:205-212, 19^9* E. D. Sisson, "Forced Choice--The New Army Rating," Personnel Psychology, 1:365-381, 19^8.

22

1.

Collection of brief essay descriptions of suc­

cessful and unsuccessful personnel. 2.

Preparation of a complete list of descriptive

phrases or adjectives culled from these essays, and the administration of this list to a representative group of personnel. 3.. Determination of a preference index and a dis­ crimination index for each descriptive phrase or adjective. 4.

Selection of pairs of phrases or adjectives

such that they appear of equal value to the rater (prefer­ ence index) but differ in their significance for success as an officer (discrimination index). « 5-

Assembling of pairs of phrases so selected into

tetrads. 6.

Item-selection against an external criterion

and cross-validation of selected items. The result of these procedures is a scale comprised of a series of tetrads of statements, two favorable, two unfavorable, such as the following: 1.

Commands respect by his actions

2.

Coolheaded

3.

Indifferent

4.

Overbearing

23 From each tetrad the rater selects the statement S} that most and the one which least describes the ratee.

The

two favorable statements supposedly are of equal preference value, but one discriminates between good and poor person-

j / t i e l while the other does not.

The same is true of the two

unfavorable statements. Unfortunately, Sisson did not report the reliability, objectivity and validity figures obtained with the use of this promising technique in the Army, except to say that the forced-choice scale was considerably more reliable and valid than any device used in the Army to this time. Richardson,1^ writing on the use of the same type of scale in industrial situations, reported reliability coefficients above .90 and validity coefficients of from .62 to .7^ on populations of supervisory personnel which were hetero­ geneous with respect to types of work supervised, super­ visory level, and length of service.

The correlation be­

tween two ratings made by different persons using differ­ ent forms was .69*

Distributions were relatively free

from negative skewness. Comparisons of techniques.

In completing this review

of recent progress in merit-rating techniques, two studies

irf Richardson, op. cit., p. 211.

24 should be mentioned which offer a view of the relative effectiveness of two of the basic approaches. Hausman, Begley, and Parris

determined the inter-

rater agreement on three measures of proficiency devised for evaluating the performance of B-29 mechanics.

A B-29

check-list of 35 specific items including four steps of assistance necessary for the performance of each task was found to have an inter-rater agreement of .65 .

Finally,

an Overall Performance Rating Scale which questioned the supervisor’s desire to keep the man in question, had an inter-rater agreement of .68 . As far as the objectivity of these instruments was concerned, then, there seemed to be little to choose be­ tween them.

In predicting the assessment of trained eval­

uators, however, the check-list seemed to be slightly su­ perior.

For 62 cases, the check-list scores correlated .36

with the evaluators’ opinions, while the Work Habits and Attitude Rating Scale and the Overall Performance Rating Scale correlated .41 and .40 respectively.

H. J. Hausman, J. T. Begley, and H. L. Parris, ’’Selected Measures of Proficiency for B-29 Mechanics: Study No. 1,” Human Resources Research Laboratories Report No. 7* (Washington 25 > D. C., July, 1949)•

25 M a h l e r 1^

did an experimental study on the effective­

ness of a weighted check-list and a conventional five-degree rating scale in evaluating the performance of sales person­ nel.

The following results were reported: 1.

The distributions of ratings were spread more

uniformly over a wider area and were not as negatively skewed with the rating scale as with the check-list. 2.

There was no significant difference in the ob­

jectivity of the two scales.

Inter-rater agreement was .73

for the rating scale and .71 for the check-list. 3.

(N = 1 0 2 )

There was no significant difference in the re­

liability of the two devices.

Over a six-month interval

the reliability of the rating scale was .51 * that of the check-list .47* 4.

(N = 75)

In predicting the only available outside criteri­

on, volume of sales, which admittedly is only a part of overall sales performance, the rating scale had an insigni­ ficant edge (.48) over the check-list (.41). In addition to these results, eleven raters were poll­ ed on their preference for the two systems.

They were divid­

ed on which method took the most time, and on which form was

J Mahler, "An Experimental Study of Two Methods of Rating Employees," Personnel, 2 5 : 2 1 1 - 2 1 9 * 1 9 4 8 .

26 easiest to discuss with the employees.

All raters indi­

cated a higher degree of confidence in the ratings ac­ complished with the rating scale than with the check-list method.

They felt they could give a more definite, ac­

curate picture with the rating scale, and liked the oppor­ tunity to rate employees on one trait at a time.

As a

regular method of evaluation, all eleven raters preferred the rating scale. In addition to the studies reviewed which pertain in general to rating procedures, two studies have been made which have to do with the specific type of rating format to be reported in this dissertation. Stevens and Wonderlic2^ first reported the advantages of this format in 1934.

It differs from conventional forms

only in the fact that all men are rated in one trait at a time, rather than the more familiar procedure in which one man is rated in all traits at the same time.

This is more

than a superficial difference in methodology since it per­ mits the rater to rate each man in relation to the others in the group, rather than requiring him to rely on more or less ambiguous descriptions of behavior as absolute landmarks

^20 g# Stevens and E. P. Wonderlic, "An Effective Revision of the F&ting Scale Technique," Personnel Journal, 13:125-134, 1934.

27 along the continuum. After considerable experience with this rating form in business, Stevens and Wonderlie reported that judges' ratings were consistent, and that halo effect and acquaint­ anceship factors did not materially affect judgments. In 19^2, Gi l i n s k i ^ tested this conclusion on reduc­ tion of halo effect.

Using a graphic rating scale, ten

mature male faces on photographic slides were judged on: (1 ) general impression, ( 2 ) honesty, and (3 ) courtesy.

Rat­

ings were first made on all three traits at once for each face, then on only one of the traits each time a slide was presented.. The size of the Pearson r between each pair of traits was secured under each of the two conditions and tak­ en as an index of the halo effect.

It was found that these

correlations were substantially reduced by judging all per­ sons on one trait at a time.

Transformation of the indi­

vidual correlations into Fisher Z-scores yielded a mean difference between conditions significant at the one per cent level of eorifidence. In addition to the results obtained by Stevens and Wonderlie and later by Gilinski, a rating scale in which all

/ A. S. Gilinski, "The Influence of the Procedure of Judging on the Halo Effect," American Psychologist, 2 :3 0 9 -3 1 9 ,

1947.

28 men are rated on one trait at a single time had been recommended as long ago as 1936 by Guilford.^

Such a form

also was tried out in the Army Air Forces Aviation Psy­ chology Research Program, although the success of that trial has not been reported. A review of the recent literature on rating devices has indicated that several promising advancements in tech­ nique have been made in recent years.

In an effort to get

closer to actual job performance, several varieties of per­ formance check-lists have been built which seem to repre­ sent an advancement over conventional rating scales partic­ ularly from the standpoint of reliability and objectivity. Possible disadvantages of this technique are lack of spread of scores and excessive negative skewing.

566 p p .

29 some situations, it is conceivable that raters would strong­ ly object to describing behavior without at the same time knowing whether the description was a favorable or unfavor­ able indicator. v In addition to the moves to get closer to the report­ ing of objective behavior and away from direct evaluation of behavior, the recent literature indicates a trend toward use of the fundamental psychological scaling techniques--those of ranking, paired-comparisons, and equal-appearing inter­ vals.

The most natural procedure for judging the relative

worth of two men is to compare them directly on some perti­ nent attribute.

The most natural procedure for judging the

relative worth of more than two men is to rank the men from first to last on the quality in question, making pairedcomparisons where necessary. / The performance check-list, and the forced-choice technique represent attempts to set up absolute standards against which evaluations can be accurately and reliably made.

Lacking absolute standards, it appears that the ref­

erence points having the most significant and reliable mean­ ing for the raters are the men who are to be rated themselves. The paired-comparison technique, the buddy rating technique, and the rank-comparison rating method utilize this principle. The difficulty with these three procedures is that they re­ quire much time, effort, and expense if ratings are to be

30 gathered on more than one trait. As an initial attack on the whole problem of ship­ board performance criteria, the present study required not only use of the most promising approach to rating, but broad coverage of possible appropriate traits as well.

The

format which seemed to offer the most promise of attaining this goal was that recommended by

Guilford,

^3 and tested

by Stevens and Wonderlie,^ and by G ilinski.^

It was hoped

that development of this approach would provide for the most systematic and reliable collection of subjective impressions of performance, while a performance check-list, to be devel­ oped as a second phase of this project, would represent an attempt to develop absolute standards of job performance.

Guilford, c>p. cit.

24 Stevens and Wonderlie, op. cit. Gilinski, op. cit.

CHAPTER III

DESCRIPTION OF THE RATING SCALE AND THE NATURE OF THE SAMPLES The format.

The original (RF 101) and revised (RF

105) forms of the Rating Scale to be described in this chapter may be examined in Appendix A.

As was indicated

in the previous chapter, It differs from typical rating scales primarily in the fact that it provides for the rat­ ing of all men at the same time on a separate page.

It Is

believed that this feature helps provide for the most nat­ ural rating procedures--those of paired comparisons and ranking.

It is hypothesized that the format also reduces

mechanical tendencies to rate a man in the same place from trait to trait. Selection of the traits.

A broad selection of per­

formance and adjustment traits was made in an effort to sample as many aspects of shipboard behavior as possible. It was considered better to be over-inclusive rather than run a risk of being under-inclusive.

The scale was design­

ed originally for use with a population which was very heterogeneous with respect to age, length in service, and type of job performed.

Thus descriptions of performance

were couched in very general terms, with the expectation

32 that performance tests and performance check-lists which were being developed under the same Navy contract would measure the more specific aspects of shipboard perform­ ance.

The final selection of traits was a result of a

combination of ideas from previous rating-scale studies and from discussions with key naval personnel who were in a position to make valuable suggestions on the importance of certain traits.

The following traits comprised the

original scale> RF 101: Social Adjustment, Quality of Work Neatness of Appearance Cooperation s Watch Standing Knowledge of the Job Discipline ^ Application and Initiative ^ Dependability Adaptability Leadership ^ Overall Efficiency Neatness of Work ^ Ability to be Taught Care of Equipment ^ Ability to Troubleshoot ^ Sincerity in the Job * Manual Skill Overall Efficiency in Rate

1 Both "Overall Efficiency" and "Overall Efficiency In Rate" (Pay Grade) were Included in the scale in an effort to get the best overall rating. Some raters had thought they could do a better job by considering each pay grade as a group.

33 The second form of the scale (RF 105) contained all of the above traits with the exception of Neatness of Ap­ pearance , Dependability, Adaptability, Ability to Trouble­ shoot, and Manual Skill.

These traits were eliminated

from the second form when examination of trait intercorrela­ tions and factor results indicated that they were largely duplicating the contribution of certain others. Wording of the traits.

An effort was made to define

the traits in simple, short phrases with as much reference to objective behavior as possible.

Each trait was defined

at the top of the page and this was followed by four groups of statements of degree down the side of the page.

In an

effort to decrease the n e g a t i v e ^ skewness so often found in distributions of ratings, only the bottom-most group of statements indicated non-acceptable performance.

The next

higher group comprised the "barely acceptable" category, and the top two groups of statements indicated superior perform­ ance, the upper-most being extreme.

It was hypothesized

that raters would show less reluctance to placing a man in the ‘barely acceptable" category than they would to assign­ ing him an unacceptable rating. In the original scale the order from good to bad was reversed sometimes in order to combat any tendency to rate \

in the same places from page to page.

In practice, this

34 procedure tended to confuse a certain proportion of the raters (and invalidate their ratings), so in the second form the failure category appeared at the bottom of every page. The samples.

The first Rating Scale (RF 1 0 1 ) was

administered to 187 Electrician’s Mates and Enginemen (Strikers through Chief Petty Officers) in Submarine Squad­ rons 3 and 7 at San Diego.

The personnel were selected

solely on the basis of availability.

Part of the sample

was gathered in January and part in April of 1949*

Crews

from the following submarines participated: U.S.S. U.S.S. U.S.S. U.S.S. U.S.S. U.S.S. U.S.S.

Diodon Ronquil Baya Carp Barbero Capitaine Redfish

U.S.S. U.S.S. U.S.S. U.S.S. U.S.S. U.S.S. U.S.S.

Segundo Pomodon Caiman Charr Sea Lion Blower Cusk

The second administration of the Rating Scale (RF 1 0 5 ) took place in August and September, 1949-

This time 286

Electrician’s Mates and Enginemen were rated.

The following

submarines were represented in the sample: U.S.S. U.S.S. U.S.S. U.S.S. U.S.S. •U.S.S.

Blenny Carbonero Carp Catfish Cusk Redfish

The raters.

U.S.S. U.S.S. U.S.S. U.S.S. U.S.S. U.S.S.

Ronquil Segundo Pomodon Remora Barbero Volador

An effort was made to get as many raters

35 as possible who felt they knew the men' well enough to rate them fairly.

In most instances this meant the Engineering

Officer, the two leading Chief Electrician's Mates, the two leading Chief Enginemen, and in some instances the Executive Officer or the Assistant Engineering Officer.

The great

majority of men were rated by three people. Instructions to the raters.

The written instruc­

tions, together with sample ratings, which were given to all raters may be examined on pages 105> 106 and 107 of Appendix A.

The instructions were essentially the same

for both Form RF 101 and Form RF 105 » except that for the latter form the instruction was given to avoid tie ratings if- at all possible. During the first administration of the scale, either the writer or Dr. Clark L. Wilson personally instructed most of the raters on what was expected of them.

Some com­

mon pitfalls of rating such as excessive leniency, halo effect, and high interrelationships between traits due to logical errors were explained to the raters in simple terms, and they were asked to guard against them.

During the sec­

ond administration, the instructions appearing in the scale had to carry the load since it was not possible to instruct the raters personally.

36 Scoring of the scales.

The distance of each check

mark from the bottom of the continuum line was converted into a numerical, score by simple measurement with a centi­ meter rule.

Possible raw scores ranged from zero to 13.0,

each check mark being read to the nearest half centimeter. The raw scores then were converted into standard scores in order to equate the means of all raters, and the standard scores were in turn converted into STEN-scores which pro­ vided a workable scale of positive whole numbers. It is recognized that this scoring procedure in­ volved the assumption that the distances between groups of statements were equal.

This assumption was not necessary,

however, in the statistical analyses of results that follow, in which all scores were placed simply into an upper or lower half with respect to the median.

CHAPTER IV

MEANS AND DISPERSIONS OF RATINGS In Table I, on the following page, mean ratings and standard deviations are reported for each trait in the orig­ inal form of the rating scale.

To indicate the variability

of these statistics, the middle 80 per cent range both of the raters’ means and their standard deviations is included. In many rating, scales there is a notable tendency for raters to reduce the usefulness of their evaluations by: (l) assigning high ratings, on the average, and (2) indicat­ ing little or no difference between men (small dispersions). As indicated in the previous chapter, it was hoped that the format of this scale, together with the manner in which the trait descriptions were graduated, would appreciably reduce both these tendencies. In large part these expectations were realized.

There

was, of course, variability in the leniency--stringency fac­ tor, mean ratings for individual raters extending all the way from 5-9 to 10.0 on the 13-point scale.

However, the

mean of mean ratings for 43 raters in the first sample was 7»8--quite acceptably near the center of the continuum. In general, the dispersions of ratings were suffi­ ciently large to provide for meaningful and useful

TABLE I MEAN AND MIDDLE 80 PER CENT RANGE OF RATINGS, MEAN AND MIDDLE 80 PER CENT RANGE OF STANDARD DEVIATIONS FOR EACH TRAIT (Figures represent results from 43 raters aboard 10 submarines and a total sample of 187 Enginemen and Electrician's Mates)

Trait 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13* 14. 1516 . 1718. 19* 20.

Social Adjustment Quality of Work Neatness of Appearance Cooperation Watch Standing Knowledge of Job Discipline Application and Initiative Judgment and Common Sense Dependability Adaptability Leadership Overall Efficiency Neatness of Work Ability to be Taught Care of Equipment Ability to Troubleshoot Sincerity in Job Manual Skill Overall Efficiency in Rate Overall Means

Mean of all raters’ means 7.2 7.9 7.3 8.1 7.4 7.4 9.5 6.6 8.0 7.2 8.7 6.1 8.4

Middle 80 per cent range of means 5.8 5.6 4.7 6.4 5.2

3.8 7.8 4.8 6.2 4.9

6.9

8.6

4.1 6.5 5.2 6.6 6.5 5.6 6.8 6.1 7.1

7.80

5.93

7.2 8.4 8.6 7.3 8.5

7.6

_ -

9*6

1.8

10.0

1.9 1.9 2.1 2.1 2.3 1.9 1.9 1.9 2.1 1.9 2.3 2.0 1.8 1.9 1.8 2.0 2.0 1.7 1.9

-

8.8 9.5 9.4 9.4

-

11.0

-

8.8

-

10.2

-

-

-

Mean of all raters’ S.D.’S

9.4 10.4 7.9 10.0 9.2 10.2 10.5 9.2 10.2 9.4 10.2 9 *66

1.96

Middle 80 per cent range of S.D.’S

0.6 0.9 0.2 1.0 1.1 1.2 0.8 1.0 0.9 1.3 0.9 1.2 0.8 0.8

0.8 0.8 1.2 0.8

-

0.3

-

0.1

-

.84

-

2.6 2.9 2.9 3.2 3.0 3.5 3.2 3.0 2.9 3.1 3.1 3.4 3.3 2.7 3.2 3.1 3.3

3.5 2.7 3.2 3.09

00 00

39 discriminations.

There were a few raters who assigned a

large percentage of tie ratings, thus producing small standard deviations and little or no discrimination.

These

ratings, of course, were of little value in differentiating among men, and led to the instruction to avoid ties if at all possible during the second administration of the scale. The standard deviations of the individual raters for all traits combined ranged from .3^ to 3.21 on the average, and the mean of average standard deviations for all raters was

1 .96 . Considerable variability can be seen in the means and variability of scores from trait to trait.

This could be

due to one of several factors or in part due to all of them. The simplest hypothesis that presents itself is that the variability resulted simply from differences in extremeness of phraseology used from trait to trait.

No attempt was made

to equate the strength of the descriptive phrases by some technique such as Thurstone’s equal-appearing intervals. Something of this type should be done should a new form of the scale be constructed. A second possible explanation for the variability of the means from trait to trait is that some raters may have considered it more essential to assign high ratings in some traits than in others.

For example, a given rater might

reason that it was perfectly satisfactory to assign moderate

40 or even low ratings in a trait such as Manual Skill, but might consider it a reflection on his own ability if any­ thing but high ratings were assigned in a trait such as Discipline. A third hypothesis which might be offered to explain the variability in mean scores and dispersions of scores from trait to trait is that actual differences which in reality do occur are being reflected by the raters.

For

example, the mean score in Leadership may be low because a substantial proportion of the men rated actually have had no opportunity to display leadership. the variability may be large.

For the same reason

Similarly, the mean rating

in Discipline may be high because the majority of men offer no displinary problems.

The variability in Manual Skill

may be small because there is little opportunity to demon­ strate differences in motor skills in the particular jobs which were studied.

These three possibilities probably

represent the principal reasons for the variability in means and standard deviations from trait to trait. The means and dispersions of the second sample of ratings have not been computed.

From inspection there is

no indication that they differ substantially from those of - the first sample.

It is highly probable, however, that the

dispersions were somewhat larger since the instructions to avoid ties if possible appear to have been effective.

CHAPTER V

INTERCORRELATIONS OF TRAITS AND RESULTS OF THE FACTOR ANALYSES In the survey of the literature on rating scales which was reported in Chapter II, only three studies were found which had been carried as far as factorial analy­ sis. 1*2*3

These studies, with the exception of that by

Bolanovich, plus well-established indications of rating fallacies such as halo effect, logical errors, systematic errors, etc., leave one with the impression that seldom is more than one or two factors required to account for the high inter-trait correlations which typically are found in rating-scale studies. A part of the analysis of the present rating scale was to determine the magnitude of the inter-trait correla­ tions and to perform factorial analyses of the resulting matrices.

This was done for each of the two samples of

submarine personnel, under three conditions for each sam­ ple with a resultant total of six analyses.

The various

1 Bolanovich, op. c i t . ^ Chi, op. cit. 3 Ewart, Seashore, and Tiffin, op. c i t .

42 conditions of these analyses are described below. Intercorrelation of traits, Procedure

I.

In the

initial analysis traits were intercorrelated in a straight­ forward manner.

Individuals were assigned plus scores in

all traits in which they received average STEN-scores of five or better, and minus scores in all traits in which they received STEN-scores of four or less.

Tetrachoric coeffi­

cients were then computed. In addition to the Rating Scale traits, each m a n ’s General Classification Test Score (GCT), his age, length on board, education, and pay grade level were included in the matrix for analysis. In the Navy, pay grade is quite naturally actually related to most traits that make for success.

This fact

tends to increase the correlation between traits on a rat­ ing scale such as the one under discussion.

As in any or­

ganization, those subjects who are at the most advanced levels are rated higher because of their positions and, In turn, they have been advanced to those levels because they possess more desirable amounts of the traits that are neces­ sary. In view of this, and the relative naivete of the raters, one would expect the intercorrelations among the traits to be quite high.

This expectation was realized as

the matrices in Table II (first sample) and Table III (second

TABLE II INTERCORRELATIONS* OF RATING SCALE TRAITS, ABILITY CHECK LIST, NAVY G.C.T. AND BIOGRAPHICAL INFORMATION INCLUDING PAY GRADE (FIRST SAMPLE) N = 187 Enginemen and Electrician's Mates Trait 1. 2. 3. 4. 3. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16. 17. 18. 19• 20. 21. 22. 23 . 24. 25* 26.

1

Pay Grade Age .83 Length on Board .43 -.12 Education Social Adjustment .19 Quality of Work •57 .41 Neatness of Appearance Cooperation •3^ Watch Standing •71 .86 Knowledge of Job Discipline •38 Application and Initiative •33 .82 Judgment and Common Sense .80 Dependability .64 Adaptability .82 Leadership Overall Efficiency .56 Neatness of Work •53 .47 Ability to be Taught .60 Care of Equipment .82 Ability to Troubleshoot .62 Sincerity in Job Manual Skill •71 Overall Efficiency in Rate •63 Gen. Classification Test •3^ Ability Check-List •07

2

3

4

5

6

7

8

9

10

11

12

-

-

•23

-

.00 -.27 .08 .04 -.03 .48 .16 .06 -

•30 .31 .48 .68 .34

.29 .68 .64 .49

.72 .44 .46 .46 .52 •57 .42 .51 •57

.16 .14

* Tetrachoric Correlation Coefficients

.13 .14 .29 .40 .20 .22 .31 .25 .36 •35 .30 .15 .47 .29 .43 .34 .27 .25 -.01 .20

-.05 -.13 -.14 .01 .21 .01 .12 .01

.06 .04 .05 .08

-.08 .01 -.10 -.04 .00 .11 .07 .03

.44 .15

.61 •51 •35 •33 •55 .44 .47 •57 .48

.62

.42 .68 .74 •77 •51 •75

.81 .85 •77 •77

.85

.46 •83 .42 •79 .42 •79 .42 .81 •53 •83 .43 .74 .49 •79 .13 -.02 •09 .44

•33 .49 •35 .47 •32 •45 •53 .40 •43 •49 .44 .41 •56 .48 •39 •35 •49 .14

.06

.66 -47 •45 .84 •58 •63

•76 •54

.65 .80 .82

.65 .56

.81

.70 •65

.82

.69 .64 •55 •74 •54 •70 .04 •31

•75 •72 •74 •83 •72 .82

.62 •72

.16 •34

•38 •59 •93 .86 •79 .88 •75 .66 .68 •74 .86 •77 •77 •77 .27 •31

•52 •53 •53 •43 •55 •52 •37 •55 .68 •45 •56 .41 •55 .10 •27

-

.62 •71 .66

.62 •71 •73 •77 .68 •71 •89 •55 •83 .11 .20 U)

TABLE II (Continued)

Trait 13. l4. 15» 16. 17* 18 . 19* 20. 21. 22. 23. 24. 25» 26.

13

Judgment and Common Sense _ .88 Dependability Adaptability .83 .88 Leadership .82 Overall Efficiency Neatness of Work .78 Ability to be Taught •74 Care of Equipment .81 Ability to Troubleshoot .84 Sincerity in Job •83 •78 Manual Skill Overall Efficiency in Rate .85 Gen. Classification Test .17 Ability Check-List •30

14

15

16

17

18

19

20

21

22

23

24

25

-

.80 .87 .85 .81 .79

.78 .81

.85 .80 .84 .27 .41

-

•79

.89 .81 .87 .69 .86 .83 .76 .88 .15 •38

-

.77 .68 .64 .77 .85 •79 .74 .78 .17 •30

-

.81 .84 .82 .82 .74 .82 .76 .89 .81 .75 •83 .85 .89 .21 -.01 .44 .35 -

-

.75 .79 .76 .76 .85 .04 .24

-

•71

-

.81

.86

.65 .74 .20 .27

.80

-

.86

.78 .88 .77

.08 .27

.08 .22 .08 .50 .28 .35-.06 .

-

TABLE III INTERCORRELATIONS* OF RATING SCALE TRAITS, NAVY G.C.T. AND BIOGRAPHICAL INFORMATION INCLUDING PAY GRADE (SECOND SAMPLE) N = 286 Enginemen and Electrician’s Mates Trait 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 13* 16 . 17* 18. 1920. 21.

Pay Grade Age Length on Board Education Social Adjustment Quality of Work Cooperation Watch Standing Knowledge of Job Discipline Application and Initiative Judgment and Common Sense Leadership Neatness of Work Ability to be Taught Care of Equipment Sincerity in Job Overall Efficiency in Rate Overall Efficiency Length in Service Gen. Classification Test

1

2

3

4

5

6.

7

8

9

10

_

.94

.12 - .06

-

.11 .00 .68

-.01 -.01

•57

.15

.60

.18

.78 •37 .58

.23 .04 .21

.07 .05 .04

.80

.18 .16

.36

*65 .63 .83

.69 •70 .66

.69 •95 .30

* Tetrachoric Correlation Coefficients

-

.08 .21

.46 .76

.47 •59 .84 .83 .74 .51

-

-.27

.72 .68 .51 .68 .67

.60 .61 .96 .19

.17 .05 .21 .13 .20 .21 .07 -.07

.16 .05 -.11 -.01

.06 .14 -.01 .02 .02 -.01 -.09 .24

-

.67 .74 .68 .64 .47

.67 .65

-

.84 .88 •91

.82 .79

.67 .87

.62

.67 .63 .69

•91 •92 .90 .84 .88

.70

.89

.58

.86 .88 .70 .18

.68

.67 .34 .11

.80 .78 .82 •77

.82 .82 .83 •73 .80 •57 .12

-

•83

.65 .82 •85

.82 .86 •83 .86 •85 .84

.85 •53 .21

-

•57

.80 •93 •95 •87 •71 .88 .86

.87 .89 .80 .18

-

•57 .62

.56 •54 •65 .64

.62 •53

•63 •27 .21

TABLE III (Continued)

Trait

11 . Application and Initiative 12. Judgment and Common Sense 13. 14. 15. 16 . IT. 18. 19. 20. 21.

Leadership Neatness of Work Ability to be Taught Care of Equipment Sincerity in Job Overall Efficiency in Rate Overall Efficiency Length in Service Gen. Classification Test

11

12

13

l4

15

16

17

18

19

20

21

-

.77 .75

.81 .76 .84

.80 •74 .74 .54 .13

-

•92 .84

.80 .84 .83

.88 .87 .79 .19

-

.90

-

.75 .83

.77 .87

.85 .88

.86

.90 •76 .24

-

.82

-

.81

.88 .80 .81 .78 . .86

.82 .85

.71 .20

.48 .27

.67 .12

.79

•76

.66 .11

-

.94

-

.62

.61

.23

.19

-

.14

-

^7

sample) indicate.

The intercorrelations in the second

sample are somewhat higher than those obtained from the first sample.

This was due probably to less personal in­

struction of the raters in the second sample, and to the greater dispersion of scores as a result of the instruction that no tie scores be given. A word is in order in regard to the reliability of the coefficients in these and subsequent calculations.

The

standard errors are less than the N ’s of 187 and 286 for the two samples would indicate.

This is true because of

the fact that the cell frequencies in the four-fold tables did not total N, but rather added up to the total number of pairs of ratings.

Since there were two to three raters

for every ratee, this total was considerably greater than N. Inspection of Tables II and III indicate that Pay Grade is highly associated with technical skill as reflect­ ed in the traits of Knowledge of the Job, Judgment and Common Sense, Dependability, Leadership, Ability to Trouble­ shoot, and Age.

Less highly

related to PayGrade is a

group of traits which may indicate some sort of factor.

adjustment

In this group are Social Adjustment, Neatness of

Appearance, Cooperation, Discipline, Application and Ini­ tiative, Ability to be Taught, and Neatness of Work.

Leng­

th on Board is seen to have a small relationship to most

48 traits, and Education and General Classification Test Score have practically none.

The correlations of the Rating

Scale traits with the Ability Check-List (Line 26, Table II) will be explained in a later chapter.

They should not be

taken at their face value since the effects of pay grade had been removed from the Check-List scores. Intercorrelation of traits, Procedure II (variance within pay grade). Because of the substantial relationship of pay grade to most of the Rating Scale traits, and because of the great range of abilities represented in groups con­ taining everything from Strikers to Chief Petty Officers, new within-pay grade intercorrelations were computed using each subject’s average STEN-score in each trait.^

Thus if

some factor (s) other than pay grade was in part producing the intercorrelations, then the resulting matrices would still contain significant coefficients.

If, however, pay

grade were the only variable operating to produce intercor­ relations, then the matrices would contain only near-zero coefficients. In Tables IV and V it may be seen that reducing the effect of pay grade in the described manner decreased the size of the intercorrelations appreciably, but did not by

4

See Appendix B for example of this procedure.

TABLE IV INTERCORRELATIONS* OF RATING SCALE TRAITS , ABILITY CHECK LIST, NAVY G.C.T. AND BIOGRAPHICAL INFORMATION BASED ON VARIANCE WITHIN PAY GRADE (FIRST SAMPLE) N = 187 Enginemen and Electrician's Mates Trait 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13 . 14. 15 . 16. 17. 18. 19* 20. 21. 22. 23* 24.

1

Age Length on Board Education Social Adjustment Quality of Work Neatness of Appearance Cooperation Watch Standing Knowledge of Job Discipline Application and Initiative Judgment and Common Sense Dependability Adaptability Leadership Overall Efficiency Neatness of Work Ability to be Taught Care of Equipment Ability to Troubleshoot Sincerity in Job Manual Skill Overall Efficiency in Rate Gen. Classification Test

25« Check-List ■ffll!!—

",

.'."r-,,l.w r i . r,"nM

'■■

1■■

mmmmmmm—

2

-.02

3

4

5

6

7

8

10

9

11

12

-

*03 -.21 -.01 .02 -.01 .23 .01 .04 -

-.07 -.04 .19 .16 -.04 -.08 .00 -.03 .04 .17 .12 .02 -. 16 .11 .14 .02 -.02 .08 .10 .02 •15 •25 .13 .09 .03 .06 •03 .12 .01 .14 .09 .13 .09 .09 -.11 -.01 .00 .06 -.14 .12 -.03 •07 .16 .16 -.04 -.04 -.08 .12 -.01 •09 •09 .05 •03 •23 -.01 -.08 •07 .00 .01 -•03 ■■■■■!

-

•55 .09 .56 •30 •53 •17 •49 •■25 •39 .41 •33 •53 •19 .48 •37 .48 .41 •50 .42 •13 -.10

-

.40 .55 •53 •63 .54 •55 •70 •52 •51 •77 •73 .58 •77 .68 .68

•38 •57 •37 •50 •45 .44 •52 •53 •38 •36 •49 •39 •47

.63

.29

•58

•39 •03 •07

.62

.18 •32

.46 •59 .20 •78

.59 .62 .64 •43 •67 •38 •59 •57 •51



.65 •32

.62 •63 .68 •57

.67 •65 .68 .46 •59

-

•33 •55 •74 •79 •71 .68 •78 .56 •57

.65

.61

.86

.61 .50 .56 .23

.70 •55 .46

.69

.14

•32

.21

•53 .64 •13 .49

.36 .36

.61 •23 .22 •43 •37 •34 •56 •15 .49 •33 •27 .12

.29

-

.61 .63 .55 .59

-

.65 .64

.63 .69

.66 •52 •58

•50 .42

.62

.60

•58

•63 •59 .41

.69 •43 .66 .24

.16

.69 •23 .40

m m i\*m

* Tetrachoric Correlation Coefficients

4?

VO

TABLE IV (Continued)

Trait 13• 14. 15. 16 . 17. 18. 19. 20. 21 . 22. 23. 24. 25 .

Dependability Adaptability Leadership Overall Efficiency Neatness of Work Ability to be Taught Care of Equipment Ability to Troubleshoot Sincerity in Job Manual Skill Overall Efficiency in Hate Gen. Classification Test Check-List

13

.62 .68 .81 .62 .50 •71 .72 •72 .54 •59

.16 .40

14

15

16

17

18

19

20

21

22

23

24 25

_

.68 .80 .50 .62 .63

•37 •59

•75 .71

.69 .60

.67 .83

.46 •63

•72 •70 •73 .85 .80 .61 .86

.18

.16 -.07

.06

•35

.46

.40

.21

.23 •37

_

.68 .52

-

.64 .71 .72

.67

.67

.57

.72 .59

.56

-

.53

.56

.64 .68 .52 .58 .12 •35

_

.68

_

.60

.67

.79

.74 .20 •43

.05 .41

-

•51 •23 •43

-

.19 .31 .22

VJl

o

TABLE V INTERCORRELATIONS* OF RATING SCALE TRAITS, NAVY G.C.T. AND BIOGRAPHICAL INFORMATION BASED ON VARIANCE WITHIN PAY GRADE .(SECOND SAMPLE) N = 286 Enginemen and Electrician's Mates Trait_________________ 1 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13* 14. 15 . 16. 17. 18. 19. 20.

Age Length on Board Education Social Adjustment Quality of Work Cooperation Watch Standing Knowledge of Job Discipline Application and Initiation Judgment Leadership Neatness of Work Ability to be Taught Care of Equipment Sincerity in Job Overall Efficiency in Rate Overall Efficiency Length in Service Gen. Classification Test

- .06 .04 .14 .05

.08 -.03 .24 - .06

.13 .33 -.06 .00 - .06

.08 .12

.08 - .06 .45 .02

* Tetrachoric Correlation Coefficients

2

-.19

3

.23

.02

.14 .07 .25 .21 .07 .17

.30

.18 .11 .05 .10 .12 .19 .24 .10 .01 .04

.15 .07 .18 .22 .25 .18 .14 .23 .22 .22 .18 .20 .14 - .14

.28

4

•53 .74 .47 .49

.26 .32 •55 .47 .50 •52 •37 •53 .50

.62 -03 .25

5

6

7

8

9

10

.65 .64 .72 .45 .74 .70 .73

.46 .56 .47 .70 .64 •58 •55

_

.61

.63

‘.37 .54 -59 •59 .58 .70

.63

.61 •63 •57

.50 .60 .69

.78 -.21 .10

.65

•75

-.14 .07

- .06 .06

.69 .74 .74 .72

-

•37 .56

.72 .71 •53 .52 .54 .61

.67 .70 .14 .07

-

.45 .44 .44 .43 .45

.51 .43 •51 .45 -.19 .09

-

.60 .56 .63 .65 .72 •75 .61 .67 - *12 .05

TABLE V (Continued)

Trait

11 . Judgment 12 . Leadership 13. l4. 13. 16 . 17. 18. 19. 20.

Neatness of Work Ability to be Taught Care of Equipment Sincerity in Job Overall Efficiency in Rate Overall Efficiency Length in Service Gen. Classification Test

11

12

13

14

15

16

17

18

19

20

-

.63 .50 .62 .56 .64 •71

.76 .04 .20

-

.58 .64 .58 .64 .71 .77 -.15 .01

-

.63 •59 .74

.69 •75 -.07

.05

-

.61 .66 .74 .73 -.09

.16

-

.69 •70

.65 -.04 .14

-

.74 .72 .10 -.05

-

.82 -.07 -.05

-

-.11 .14

-

-.08

VJ1

ro

53 any means reduce them to near-zero.

The amount of reduc­

tion in variance can be discerned from the sums of the matrices.

In the first sample, the sum of the original

matrix, Table II (26 variables) was 35^*10.

In the re­

duced matrix, Table IV (25 variables), it was 25^*13* or about 72 per cent of the original matrix.

In the second

sample, the sum of the original matrix, Table III (21 vari­ ables, was 256.18.

In the reduced matrix Table V (20 vari­

ables) the sum was 156 .88 , about 6l per cent of the original matrix.

This greater reduction in total variance due to re­

moving the effects of pay grade in the second sample, sup­ ports the hypothesis that lack of instructions during the second sampling increased the halo due to pay grade, and thus its contribution to the total variance. As testimony to the fact that the within-pay grade variance technique was effectively removing the variance due to pay-grade level alone, the correlations of the Rat­ ing Scale variables with the trait of Age may be examined. In both samples age correlated very highly with pay grade.

(.83 in the first sample, .9^ in the second).

Removing

the effects of pay grade, then, should be tantamount to re­ moving the effects of age, and the correlations of age with the Rating Scale variables should distribute themselves rather closely about zero.

Examinations of the correla­

tions with age in Tables IV and V reveals that In both samples

54

this did indeed occur, over three-fourths of the coeffi­ cients having values between plus and minus .15*

Thus it

appears that the reducing procedure accomplished its pur­ pose . Intercorrelation of traits, Procedure III (pay grade partialled out).

As an additional check on the reducing or

within-pay grade variance technique, however, it was decided to compare these results with those obtained by partialling out statistically the effects of pay grade.

This was ac­

complished by starting with the original matrices and re­ moving the pay grade variance from each coefficient by con­ ventional partialling t e c h n i q u e s . 5

The partialled matrices

resulting may be seen in Tables VI and VII for the first and second samples respectively. In comparing specific corresponding coefficients in the reduced and partialled matrices, considerable variation can be seen.

Apparently the two procedures did not produce

precisely the same effects.

However, a comparison of the

sums of these matrices for both samples yields striking similarities.

In the first sample, the sum of the reduced

5 j. p. Guilford, Fundamental Statistics in Psychol­ ogy and Education (New York: McGraw-Hill Book Company, 194277“pp. 26b-271.

TABLE VI INTERCORRELATIONS* OF RATING SCALE TRAITS, NAVY G.C.T. AND BIOGRAPHICAL INFORMATION, PAY GRADE PARTIALLED OUT STATISTICALLY (FIRST SAMPLE) N = 187 Enginemen and Electrician's Mates Trait 1. Age 2. Length on Board 3 . Education 4. Social Adjustment 5 . Quality of Work 6. Neatness of Appearance 7 . Cooperation 8. Watch Standing 9 . Knowledge of Job 10. Discipline 11. Application and Initiative 12. Judgment and Common Sense 13 . Dependability 14. Adaptability 15 . Leadership 16. Overall Efficiency 17* Neatness of Work 18 . Ability to be Taught 19* Care of Equipment 20. Ability to Troubleshoot 21. Sincerity in Job 22. Manual Skill 23* Overall Efficiency in Rate 24. Ability Check-List 25. Gen. Classification Test

1

2

-.25 .18 - 24

-

3

4

5

6

7

8

9

10

11

12

-.

-.14 .01 -.08 .05 -.29 -.12 .05 .03 .00 -.07 -.10 .12 -.06 .04 .14 .05 •35 -.22 .21 .11 .15 -23

-

-

05 .00 10 -.01 06 .00 01 -.18 02 -.32 07 -.18 04 .28 09 .05

- 08 .38 - 18 .18 12 -. 22 00 .24 08 .14 - 10 .17 34 -.03 04 .10 15 .00 10 .04 - 06 .12 - 03 .24 19 .04 .12 16 -

* Tetrachoric Correlation Coefficients

-

.41 .08 .23 .59 • .57 •55 .53 •37 .62 .28 •35 •53 .67 .51 .66 .54 .74 •59 .58 .58 •59 .63 .71 .43 .69 •38 .66 •39 .62 .47 .66 .54 .67 .43 •53 .49 .62 .09 .44 -.07 -.20 -

-

.22 •31 .01 •37 .21 .22 •37 .20

.18 .34 .29 .27 .43

.28

.64 •37 .37



.42

.89

.62 .62

.56

.54

.63 .60

.61 .67

.52 .65 •59

.42 .72 .58

.65 .58 •50 .72 .46

.65 •72 .34

.69 .19 .09 .23 •33 .67 .54 .03 .30 .41 .00 -.08 -.12

-

.12

.65 •77 .56 .63

.60 .65 .47

.69

-

.45 .41 .41

.66 •79

.26

.62

.47 .40 .22 .46

.66

.63

.67 .69

•78 .71 .70 .69 .51 •72 .49 .75 .42 -.20

•55 .61 .53 .26 .59 .45 .44 .22 .58 .43 .49 .26 -.05 -.04

-

•75 .64 .83 .94 .48

.85 .19 .00

-

.67 .69

ui ui

TABLE VI (Continued)

Trait 13. 14. 15 . 16. 17 . 18. 19. 20. 21. 22. 23* 24. 25*

Dependability Adaptability Leadership Overall Efficiency Neatness of Work Ability to be Taught Care of Equipment Ability to Troubleshoot Sincerity in Job Manual Skill Overall Efficiency in Hate Ability Check-List Gen. Classification Test

13

14.

15

16

17

18

19

20

21

22

23

24

25

-

.64 .64

-

.60

-

.81 .84 .67 .76 •73 .51 .78 .84 .50 .64 .50 .48 .46 .76 .54 .77 .72 .63 .56 •57 •39 .72 .80 •59 •59 .44 .42 .00 -.09 -.19

-

•73

.80

-

•76

-

.74 .62 .66 •78 •67 .80 •85 •72 .69 .60 •76 .69 •77 .84 .81 .48 •37 .24 .02 -.21 -•15

-

.48 •70 .78 .40 .54 .62 •58 .77 .80 .64 •29 .37 •58 •33 .40 .00 -.37 -.18 -.03 -.19 .04 -

-

-

-

VJ1

CTn

TABLE VII INTERCORRELATIONS* OF RATING SCALE TRAITS, NAVY G.C.T. AND BIOGRAPHICAL INFORMATION, PAY GRADE PARTIALLED OUT STATISTICALLY (SECOND SAMPLE) N = 286 Enginemen and Electrician's Mates Trait 1. 2. 3. 5. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16. 17* 18. 19. 20.

Age Length on Board Education Social Adjustment Quality of Work Cooperation Watch Standing Knowledge of Job Discipline Application and Initiative Judgment and Common Sense Leadership Neatness of Work Ability to be Taught Care of Equipment Sincerity in Job Overall Efficiency in Rate Overall Efficiency Length in Service Gen. Classification Test

1

2

3

4

5

6

7

8

9

10

-.01

.17 -.24

-.16 -. 16 .03 .00 -.24 .09 .06 -.32 -.07 .10 .13 .05

-.27 .03

.02

-

.18

.06

*56

-

.10 .12 .24 -.02 .17 *15 .11 .12 -.01

.14 .11

.65

*72

*57 •52 *32 .56 *36

.80

*79

.60

.80

*70 .60 .48 .68 .57 .68

*55 *52 .58 .60 .41 •35 -*35 • -03

*77

.58

-.08 - . 16

.18 .06 .16 .18

.63 -. 28

-.14 -.04

-

.16 .21 .11 -.11 .07

.16 .20 .04 .09

.08 .04 -.11 .27

.77 .54

.80

.81 *77 *77 *73 *77 -.11 -.08

*75 .68 .70 •53 .64 -.20 -.10

-

.71 .52 .72 .78 .69 .77 .76 .76 .74 .73 .74 -.20 *03

-

•37 •69 •77 .84 .68

.60 •76 •70 •77 •79 •07 -.13

-

.41 47 •34 •32 .54 •49 .46 •34 .48 -.64 •

.08

-

.63

.58 •70 .66 •74

.67 •58 •57 -.08 - .06

* Tetrachoric Correlation Coefficients

vjn

-0

TABLE VII (Continued)

Trait 11. 12. 13. 14. 15. 16 . 17. 18 . 19. 20.

Judgment and Common Sense Leadership Neatness of Work Ability to be Taught Care of Equipment Sincerity in Job Overall Efficiency in Rate Overall Efficiency Length in Service Gen. Classification Test

11

12

13

14

15

16

17

18

19

20



.74

.60 .80 .68

.62 .80 .76 -.05 -.12

-

.76 .68 .64 .68 .79

.81 -. 16 -.02

-

.68 •74 •71

.60 .62 •03

-.03

-

•75 .66 .72

.69 -.02 .14

-

.77

.65 •73 .02 .14

-

.67 •71 .02 .13

-

.89 -.03 .04

-

-.20 -.02

-

-.49

mm

UI

00

59 matrix, Table IV was 25^*13 and that of the partialled matrix, Table VI, was 25^.41,

In the second sample, the

sum of the reduced matrix, Table V, was 156.88 while that of the partialled matrix, Table VII, was 1 6 6 . ^ .

Thus the

two methods removed approximately the same amount of vari­ ance in each sample, but they treated individual coeffi­ cients somewhat differently.

The only explanation for this

phenomenon must lie in the fact that what was accomplished precisely by mathematics was dependent on the correlation of pay grade with the other variables derived from the en­ tire sample, while in the reducing procedure each individ­ ual case required a decision as to who was high and who was low within each pay grade.

In other words, if the same

amount of variance were due to pay grade in each individual case, then the partialling procedure should correspond very closely to the reducing procedure.

It is believed that this

was not the case in every instance and thus discrepancies between the matrices resulting from the two procedures were to be expected. As a final comment on the partialling procedure, it should be pointed, out that the correlations of Age with the Rating Scale variables again distributed themselves about zero as they did in the reducing procedure.

Again, over

70 per cent of the correlations with age were within the range of plus or minus .15*

There were a few more large

60 coefficients than with the reducing procedure, however, and a larger number of negative coefflcients--a fact which will be discussed again when the results of the factor analyses are presented. The discussion on the reducing and partialling pro­ cedures should hot be concluded without recognizing that these techniques probably removed too much variance.

That

is to say, if many of the qualities under consideration are genuinely related to pay grade, then removal of the effects of pay grade eliminated real, as well as spurious, variance. Evidence that this was the case occurred particularly during the analysis of the second sample of data.

When pay grade

was partialled from this matrix, a substantial number of negative correlations arose between the Rating Scale traits and such variables as Age, Length in Service, and scores in the General Classification Test.

This, in turn, demanded a

reflection of those variables during the factoring procedure, which led eventually to a few rather substantial negative factor loadings for these variables.

(See Table XIII)

These loadings cannot be interpreted in the usual sense, and are regarded as artifacts of the reducing and partialling procedures. Results of the factor analyses.

Having computed the

intercorrelation of traits under three conditions for each

61 of the two samples, six factor analyses were next performed using the Thurstone method. was two-fold:

The purpose of these analyses

(1) to determine the extent and nature of

the factor structure of the Rating Scale under each of the three conditions of correlation, and (2) to determine how similar the factor structure"was from one sample to the next. Extraction of factors was continued as long as the cross-product of any pair of factor loadings exceeded the standard error of the zero-order coefficient between the corresponding pair of traits in the original matrix.

For

the sake of consistency and in order to introduce a minimum of assumptions, all rotations were orthogonal and the crite­ ria for rotation were positive manifold and simple structure. Rotations were accomplished according to a graphic method 6 described by Zimmerman. In assembling the traits which comprised the Rating Scale, it was felt that there were temperamental components of performance aboard submarines which were of as much importance as technical knowledge and skills.

Theoretically,

at least, the temperamental components should be independent

^ W. S. Zimmerman, !IA Simple Graphical Method for Orthogonal Rotation of Axes,” Psychometrika, 11:51-55* 19^6.

62 of the technical components, although for any given popula­ tion they might be correlated.

For this reason, orthogonal

rotations were utilized which, in fact, gave very satis­ factory solutions for the original data from both samples. For the sake of determining whether or not the factor struc­ ture was altered by the reducing and partialling procedures, orthogonal rotations were employed in those analyses also. Here the results were not nearly as satisfactory from the standpoint of identifying factors.

It may be that an

oblique solution would result in more meaning for these sets of data.

An investigation of this possibibility would be

desirable, but was not attempted in this study because of the prohibitive amount of labor involved. In Tables VIII, IX and X the results of the factorial analyses performed on the data from the first sample may be seen.

In Tables XI, XII, and XIII are the corresponding

results from the second sample.

The most remarkable feature

of these results is the fact that the factor structure in­ creased in dimensionality when the variance due to pay grade was removed.

In the first sample, there were four factors

before the effects of pay grade were removed, five factors in the 'reduced data, and six in the data in which pay grade was held constant by partial correlation.

In the second

sample this increase was from three to five and seven fac­ tors for the matrices from which pay grade was removed.

No

63

TABLE VIII FACTOR LOADINGS AFTER ROTATION AND COMMUNALITIES BEFORE AND AFTER ROTATION (FIRST SAMPLE, N = 187, PAY GRADE ACTING AS A VARIABLE) Factor

I

Trait 1. Pay Grade 27* 2. Age 22 3. Length on Board -06 4. Education -11 60 5. Social Adjustment 6 . Quality of Work 56 7. Neatness of Appearance 34 8. Cooperation 79 9- Watch Standing 32 10. Knowledge of Job 40 11. Discipline 51 12. Application and Initiative 77 13. Judgment and Common Sense 58 64 14. Dependability 13 . Adaptability 73 16 . Leadership 52 17- Overall Efficiency 69 42 18. Neatness of Work 19 . Ability to be Taught75 20. Care of Equipment 67 48 21. Ability to Troubleshoot 22. Sincerity in Job 69 42 23. Manual Skill 24. Overall Efficiency in Rate 48 -08 25* Gen. Classification List 21 26 . Ability Check-List

II 97 75 38

-12 08 31 25

* Decimal points omitted p ** h{j[ - communalities before rotation - communalities after rotation

10 44 78 14 14 64 59 45 69. 34 27 35 37 63 38 56 37

28 08

IV

h 2»* u

h 2**

07

06

1.04

-10

15

66

38 * 11 02 39 07 17 56 34 ,40 -02

31 17 41 83 35 74 83

1.03 64 31

III

16

28

18

4l 83 34 74

24

55

26

23

90

82 89

02

49 77 91 91 86 86 90 88

49 77 91 90 85 86 91

43 29 13 27 27 19 32 52 29 50 30 29 22 50 07

08

26 37

26

22 27 46 59 21 10 37 45 44 57 12 42

82 85

85 91 74 94 10 24

87 81 85 85 91 74 94

10 23

TABLE IX FACTOR LOADINGS AFTER ROTATION AND COMMUNALITIES BEFORE AND AFTER ROTATION (FIRST SAMPLE, N = 187, BASED ON VARIANCE WITHIN PAY GRADE) I

Factor 1. 2. 3. 5. 5. 6. 7* 8. 9. 10. 11. 12. 13 . 14. 15. 16. 17* 18. 19* 20. 21. 22. 23. 24. 23*

Trait Age' Length on Board Education Social Adjustment Quality of Work Neatness-of Appearance Cooperation Watch Standing Knowledge of Job Discipline Application and Initiative Judgment and Common Sense Dependability Adaptability Leadership Overall Efficiency Neatness of Work Ability to be Taught Care of Equipment Ability to Troubleshoot Sincerity in Job Manual Skill Overall Efficiency in Rate Gen. Classification Test Ability Check-List

13* -19 -07 57 30 -02 56 35 -02 -08 -05 19 05 32 -02 15 52 20

28 -04

26 20 23 -22 48

* Decimal points omitted ** hS - communalities after rotation - communalities before rotation

II

III

IV

V

-21

05 19

07 07

-08

-18

32 32

41 79 11 50 32 55 23 27 23 33 68 54 53

30 03 15 -03 16

-11 32 02 23 46 30 46 37

60 42 53 41 48 50 47 33 07 43 30 46 22 54 27

26

18 20 19 51 30 48 25 49 19 15 55 59 55

28 78 25 52 24 01 -07

08 34 63 46 35

18 62 05 25

38 15 36 36 -12 45 51 58 13 35 32 27 40

18 23 58 52 09 24 19

,h2** u

h2** “r

16

16

09 17 59 88 41 71

09 17 59 89 40 71

61 82 52

69 69 86 83 69 92

60 82 52 69 69 86 84

69 93

81

81

62 74 98 80 66

62 77 98

80 18

80 18

40

40

81 66

TABLE X

FACTOR LOADINGS AFTER ROTATION AND COMMUNALITIES BEFORE AND AFTER ROTATION (FIRST SAMPLE, N = 187, PAY GRADE PARTIALLED OUT) Factor

1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15 . 16 . 17. 18. 19* 20. 21. 22. 23• 24. 25.

Trait Age Length on Board Education . Social Adjustment Quality of Work Neatness of Appearance Cooperation Watch Standing Knowledge of.Job Discipline Application and Initiative Judgment and Common Sense Dependability Adaptability Leadership Overall Efficiency Neatness of Work Ability to be Taught Care of Equipment Ability to Troubleshoot Sincerity in Job Manual Skill Overall Efficiency in Rate Gen. Classification Test Ability Check-List

I 4l* 00 -14 40 14 -01 55

28 22 -10 59 45 29 37 05 59

26

II

33 -28

62 -06 26

18 01 05 -03 31 -03 42 36 -17 31 12 28

-02 09 44 26 10

-06 28

26

16

22 -01

10 21

* Decimal points omitted ** hS - communalities before rotation h^ - communalities after rotation

01 03 15

III -30 10 -13 24 54 22 35 02 39 44 52 50 72 74 52 52 34 79

62 79 45 64 75 -37 35

IV

V

VI

h2** u

h2** r

00

08 -16

-03 44 14 44 09 10 07 57 -14 13 -09 05 32 02 05 30 34 29 25 25 35 30 01

37 32 53 52 65 31 73 90 68 57

38 31 53 50 66

-07 23 28 40 -12 53 41 48 01 64 47

16 48 4l 36 67 45 11 03

61 28 42 -20 40

-20 09 33 4l 13 57 49 51 13

06 04 01 29 22 11 38 45 23 39 -14 20 -03 31

1.06 84 88 91 66 93 84

28 73 89

69 58 1.05 85 86 94

62 89 83

1.06

1.06

75 94 91

75 94 92 63 87 31 45

62 90

28

32

-19

46

66

TABLE XI FACTOR LOADINGS AFTER ROTATION AND COMMUNALITIES BEFORE AND AFTER ROTATION (SECOND SAMPLE, N = 286, .PAY GRADE ACTING AS A VARIABLE) Factor 1. 2. 3• 4. 5. 6. 7. 8. 9. 10. 11. 12. 13* 14. 15. 16. 17. 18. 19. 20. 21.

Trait Pay Grade Age Length on Board Education Social Adjustment Quality of Work Cooperation Watch Standing Knowledge of Job Discipline Application and Initiative Judgment and Common Sense Leadership Neatness of Work Ability to be Taught Care of Equipment Sincerity in Job Overall Efficiency in Rate Overall Efficiency Length of Service Gen. Classification Test

I

II

III

99* 95 05 -22 44 74 56 58

04 02 04 19

14 10 46 41 01 14 21 22

80 32 59

80 78 73 45 68 66 73 72 92 32

* Decimal points omitted ** h, - eommunalities before rotation 2 hr - eommunalities after rotation

62 60

hg**

1.00 90 22 25 5893

1.00

91 22 25 58 94

81

82

87 95 53 78 93 92

85

21

84 88 86 83 88 88

88 96 53 78 93 92 85 85 88 86 83 88 88

-26

18

18

68 70 42 59 65 4l 46 55 75 64 64 54

10 35 33 15 30 11 14 05

60

06

-02 09

38 27

TABLE XII FACTOR LOADINGS AFTER ROTATION AND COMMONALITIES BEFORE AND AFTER ROTATION (SECOND SAMPLE, N =286, BASED ON VARIANCE WITHIN PAY GRADE) I

Factor

1. 2. 3* 5. 5. 6. 7. 8. 9. 10. 11. 12. 13 . 14. 15* 16 . 17. 18. 19* 20.

Trait Age Length on Board Education Social Adjustment Quality of Work Cooperation Watch Standing Knowledge of Job Discipline Application and Initiative Judgment and Common Sense Leadership Neatness of Work Ability to be Taught Care of Equipment Sincerity in Job Overall Efficiency in Rate Overall Efficiency Length in Service Gen. Classification Test

-21* -02 01

60 79 77 46 58 48 72 34 57

60 65 66 58 78

62 -07 10

* Decimal points omitted ** h§ - eommunalities before rotation - eommunalities after rotation

II

14 32 -01 64

18 31 38 36

-03 23 52 20 10 17 14

26 18 41 -56 21

III

70 03 -13 10 17

18 16 48 18 17 48 31 15 40 08 00

16 20 28 -07

2** hr

IV

V

02 20 55 00 34 02 54 02

-16

57

03

14 39 79 86 74 67 73 38 76 86 68 64 85 67 84 84 86 44 27

16 42 30 46

28 43 41 29 15 50 -07 -24

28 -01

26 16 06 20 31 02 40 07 42 25

18 61 38

16 21 39

58 14 39 78 87 75 67 74 39 77 86 68

65 86 66

85 85 87

45 27

TABLE XIII FACTOR LOADINGS AFTER ROTATION AND COMMUNALITIES BEFORE AND AFTER ROTATION (SECOND SAMPLE, N = 286, PAY GRADE PARTIALLED OUT) Factor

I

II

III

IV

V

VI

-24 -23 -22 40 24 41 23 31 04 34 42

20 -14 -13

-26

18

62

46 01 23 34 48 42 -21 -37

42 37 54 49 10

VII

h 2**

p** hr

73 39 30 58 91 75 83

74 40 30 57 90 75 82 90 66 74 88 95 76 93

u

Trait 1. 2. 3. 4. 3. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15* 16. 17* 18. 19• 20.

Age Length on Board Education Social Adjustment Quality of Work Cooperation Watch Standing Knowledge of Job Discipline Application and Initiative Judgment and Common Sense Leadership Neatness of Work Ability to be Taught Care of Equipment Sincerity in Job Overall Efficiency in Rate Overall Efficiency Length in Service Gen. Classification Test

-07* 23 05 35 73 53 58 72

18 62 33 45 57 88 67 55 52 55 -17 -09

34 -03 31

-61

16

16

38 33

09

29

16

01

09 -01 55 -01 -01 24 -01 03 -05 00

13 42

09 21 64 53

06 04 39 47 10 -01 01 00 47 45 -51 -03

18 44 31 54 31 50 20 49

26 -40 -02

-15 02 31 12 07 31 13 27

26 36 02 14 03 01 36 36 27

08 27

-30 32 17 34 11 4l 14 00 05 36 27 23 12 15 31 09

28 14 23 23

89 66 74 88 95 76 9^ 90

81

89 80

93 89 98 55

94 89 97 55

* Decimal points omitted ** h^ - eommunalities before rotation - eommunalities after rotation CT\

oo

69 obvious explanation for this phenomenon occurs to the writ­ er.

One reasonable hypothesis might be, however, that in

the first analysis of each sample, pay grade acted as a general factor and increased the correlation between fac­ tors to such an extent that two or more factors emerged as on e . Identification of factors.

The most readily iden­

tifiable factors were extracted from the original matrices of both samples.

In both these analyses a factor of Tech­

nical Competence emerged which was practically synonomous with pay grade.

The traits having the highest loadings in

these factors are listed below as a suppliment to the tables already presented. First Sample:

Technical Competence Pay Grade Knowledge of Job Age Leadership Judgment and Common Sense Ability to Troubleshoot

Second Sample:

.97

.78 .75

.69 .64

.63

Technical Competence

Pay Grade Age Length in Service Judgment and Common Sense Knowledge of Job Leadership Quality of Work Overall Efficiency in Rate

-99 .95 .92 .80 .80

.78 .74 .73

70

In both original analyses a factor emerged which seemed best identified as Personal Adjustment of some kind Involved in it were attitude toward the job and shipmates, effort, sincerity, dependability, etc.

The traits having

the highest loadings in this factor for each sample are listed below: First Sample:

Personal Adjustment Cooperation Application and Initiative Ability to be Taught Adaptability Sincerity in*the Job Overall Efficiency Care of Equipment Dependability

Second Sample:

.79 .77 .75 .73

.69 .69 .67 .64

Personal Adjustment

Ability to be Taught Watch Standing Cooperation Application and Initiative Care of Equipment Sincerity In the Job Social Adjustment Quality of Work Overall Efficiency

.75

.70 .68

.65 .64 .64

.62 .60

.60

Also identified from the original data of the first sample was a factor which appears best described as Care­ fulness or Neatness in Work and Person.

Traits having the

highest loadings were: First Sample:

Carefulness or Neatness Quality of Work Neatness of Work

.56 .52

71 Care of Equipment Overall Efficiency in Rate Discipline Neatness of Appearance

.50 .50 .4*5 .40

This rather well-defined factor was not identified in the analysis of the data from the second sample. The fourth and final factor emerging from the orig­ inal data of the first sample appeared to be some sort of Efficiency or Job Performance factor.

The traits with

principal loadings were: First Sample:

Efficiency or Job Performance Neatness of Work Overall Efficiency in Rate Watch Standing Overall Efficiency Sincerity in the Job Manual Skill Performance Check-List Ability to Troubleshoot Judgment and Common Sense Quality of Work

.59 .57 .55 .46 .45 .44 .42 .37 .37 .34

The third and final factor emerging from the original data of the second sample was found only in that sample.

It

seemed best identified as a Maturity factor: Second Sample:

Maturity

Length on Board Education Knowledge of the Job Judgment Leadership

.46 .41 .38 •35 .33

The factors extracted from the reduced and partialled

72

matrices were very difficult to identify.

Traces of the

factors extracted from the original data, and listed above, can be seen throughout the lists below.

The loadings, of

course, are v e r y .different due to the removal of pay grade variance.

An attempt has been made to classify the factors

into two broad groups--those of a Technical nature and those of an Adjustment nature.

Even with such a broad classifica­

tion, some factors seemed to be as much a member of one class as of the other. First Sample:

Technical

Adjustment Discipline Overall Efficiency in Rate Judgment and Common Sense Leadership Adaptability Overall Efficiency Sincerity in the Job Neatness of Appearance Match Standing Care of Equipment Application and Initiative Dependability

Reduced Data

.60 *54 .53

.50 .48 .47 .46 .46 .46

Ability to Troubleshoot Neatness of Work Ability to be Taught Overall Efficiency Manual Skill Knowledge of Job

.78 •59 •55 •55 .52

.51

.43 .42 .41

Adjustment and Technical (Perhaps interest in work) Sincerity in the Job .58 Dependability .58 Manual Skill .52 Judgment and Common Sense •51 Application and Initiative .45 .40 Ability to be Taught .38 Neatness of Appearance .36 Watch Standing Leadership •35 .30 Age

73 Adjustment Social Adjustment Cooperation Neatness of Work Ability Check-List Watch Standing Adaptability Quality of Work

Technical •37

.56 .52 .48 .35 .32 .30

First Sample:

Partialled Data

Adjustment Application and Initiative Overall Efficiency Cooperation Judgment and Common Sense Ability to Troubleshoot Age Social Adjustment Adaptability

Technical •59 •59 •55 .45 .44 A

1

Adjustment

.62 A 2

.36 .33 .31 .31

Neatness of Work Application and Initiative Sincerity in the Job Cooperation Adaptability Knowledge of the Job

.67 .64

.61 .53 .48 .48

Adjustment

Adjustment Watch Standing Discipline Knowledge of the Job Care of Equipment Neatness of Appearance Sincerity in the Job

Ability to Troubleshoot *79 Ability to be Taught •79 Overall Efficiency in Rate .75 .74 Adaptability •72 Dependability .64 Manual Skill

.40 •37

Maturity Education Judgment and Common Sense Dependability Age Discipline Leadership

Quality of Work •79 .68 Adaptability Care of Equipment .63 Overall Efficiency in Rate .62 Knowledge of the Job .55 Leadership .54 Overall Efficiency •53 Cooperation •50

•57 .51

A9. .45 .41 •39

Watch Standing Length on Board Social Adjustment Sincerity in the Job Neatness of Work

•57 .44 .44 •35 •34

74 Second Sample:

Reduced Data

Technical

Maturity

Quality of Work Overall Efficiency in Rate Cooperation Application and Initiative Care of Equipment Ability to be Taught Neatness of Work Sincerity in the Job Leadership

•79

.78 •77 .72 .66

.65

Age Knowledge of Job Judgment and Common Sense Ability to be- Taught Leadership Length in Service

.70 .48 .48 .40 •31

.28

.58 .58 •57

Adjustment

Adjustment .64 •52 .41 •38 .36 .32

Social Adjustment Judgment and Common Sense Overall Efficiency Watch Standing Knowledge of Job Length on Board

Education Watch Standing Overall Efficiency Leadership Ability to be Taught Application -and Initiative

•55 .54 .51 .46 .43 .42

Adjustment Sincerity in the Job Neatness of Work Judgment and Common Sense General Classification Test Overall Efficiency in Rate Discipline Second Sample: Technical Ability to be Taught Quality of Work Knowledge of Job Care of Equipment Application and Initiative Watch Standing Neatness of Work Overall Efficiency Sincerity in the Job

.61 .42 .40 .39 .33 .31

Partialled Data Maturity

.88 .73 .72

.67 .62

.58 .57.53 .55

Length in Service .64 Discipline .55 General Classification Test.53 Age .34 Education .31 Leadership .24

75 Technical

Adjustment

Leadership Overall Efficiency in Rate Overall Efficiency Knowledge of the Job Judgment and Common Sense Length on Board Education

.47 .47 -45 .42 .39 .38 .33

Overall Efficiency in Rate Neatness of Work Overall Efficiency Judgment and Common Sense Cooperation Social Adjustment Sincerity in the Job Application and Initiative Adjustment

Adjustment Leadership Watch Standing Care of Equipment Discipline Judgment and Common Sense Sincerity in the Job Quality of Work Neatness of Work

.48 .46 .42 .42 .41 .40 .34 .34

.62 .54 .54 .50. .49 .49 .44 .42

Overall Efficiency in Rate Sincerity in the Job Judgment and Common Sense Watch Standing Social Adjustment

.36 .36 .36 .31 .31

Adjustment Cooperation Application and Initiative Social Adjustment Length on Board Care of Equipment

.41

.36 .34 .32

.31

The difficulty in identifying these last groups of factors probably rests largely with the fact that too few ex­ perimental variables were included for identification of the large number of factors, which theoretically might be rated. To this difficulty may be added the fact that the Rating Scale traits were not sufficiently definitive to permit a distinction between two factors which were comprised in large part of the same traits.

For example, in the field of

aptitudes, identification of a factor in which many tests

76

have loadings can be made with considerable confidence if a relatively pure test such as number operations or vocabulary is highly saturated with the factor.

In the present analy­

sis, however, one would hesitate to identify a factor as Cooperation, for example, simply because the trait called Cooperation had the highest loading in that factor. Identification of factors extracted from the reduced and partialled matrices was increased in difficulty also by the fact that these procedures removed a good proportion of genuine variance as well as spurious variance.

This left

many of the traits with reduced loadings in certain factors, and decreased differences between traits which should have had heavy loadings and those which had less significant loadings.

This resulted in less distinct patterns of load­

ings which-were difficult to identify.

CHAPTER VI

AGREEMENT OF THE RATERS In this chapter the extent to which the raters agreed with one another on the various traits will be dis­ cussed.

This characteristic has been defined in this study

as the objectivity of the scale.

It is in this quality that

subjective measuring devices are often most lacking. Inter-rater agreement was determined by correlating one rater’s judgments on each trait with those of another rater who had rated the same subjects.

Thus if a pair of

raters both considered a given ratee either above or below the median of the group on a certain trait, it constituted an agreement.

If, on the other hand, one rater considered

the man above the median and the other considered him below It, a disagreement was recorded.

By considering the re­

gression of each rater on each other rater, a four-fold table of frequencies representing agreements and disagree­ ments was built up for each trait in the scale and for average score.

From these four-fold tables tetrachoric

correlation coefficients were computed which gave indices of agreement. The question soon arose as to just what role was being played by pay grade in determining the magnitude of these

78 correlations.

For example, it seemed a reasonable assump­

tion that most raters would generally agree that a FirstClass Petty Officer was better than a Third-Class Petty Officer or a Striker.

Thus the factor of pay grade alone

would create considerable agreement among raters.

It was

considered necessary, therefore, to determine whether there was any basis for agreement among raters other than the factor of pay grade.

This was accomplished by reduc­

ing the effects of pay grade in a manner similar to that for determining the within-pay-grade trait intercorrela­ tions.

Each rater's subjects were grouped by pay grade.

On

each trait these pay-grade groups were divided into upper and lower sub-groups with the division at the median.

(if

there was only one subject in any given pay grade, he was omitted from the calculation.)

Those subjects in the upper

sub-group were assigned a passing score (+) while those in the lower sub-group received a failing score (0). With these scores assigned for each rater for each trait for each pay grade, the intercorrelations of raters' judgments was again a matter .of plotting a four-fold table of frequencies.

All pay grades were plotted on the same

table and each subject appeared in the table as many times as there were pairs of raters who judged him. This latter coefficient may be termed a within-paygrade coefficient.

The results for each trait, under each

79 of the two conditions, and for each of the two samples, are shown in Table XIV.

In addition to the agreement on the

single traits, the agreement on average scores is indicated. The values for the traits of Neatness of Appearance, De­ pendability, Adaptability, Ability to Troubleshoot, and Manual Skill do not appear for the second sample since those traits were eliminated from the second form of the scale. Several of the results reported in Table XIV require comment.

There is considerable variability in the size of

the coefficients from trait to trait as might be expected. Considering both samples and both conditions of correla­ tion, the trait of Social Adjustment seems to be about the least objective.

This is reasonable in view of the obvious

difficulties involved both in defining Social Adjustment and in agreeing on objective indicators for such a condition. Among the traits having the highest degree of objec­ tivity under both conditions of correlation were Knowledge of the Job, Discipline, Judgment and Common Sense, and Leadership.

This, too, was in line with expectations, since

behavioral referents for these traits are much easier to de­ fine and their manifestation easier to agree upon. The factor of pay grade seemed to have an unpredic­ table effect in raising or lowering the inter-rater agree­ ment.

This was true at least in the first sample in which

removal of pay grade variance caused the majority of the

80

TABLE XIV INTER-RATER AGREEMENT* ON THE VARIOUS RATING SCALE TRAITS AND ON AVERAGE SCORE, FOR BOTH SAMPLES, WITH PAY GRADE IN AS A VARIABLE AND WITH PAY GRADE HELD CONSTANT Trait

1. 2. 3. 4. 5-

6. 7« 8. 9. 10. 11. 12. 13* 14. 15• 16. 1718. 19* 20.

Social Adjustment Quality of Work Neatness of Appearance Cooperation Watch Standing Knowledge of Job Discipline Application and Initiative Judgment and Common Sense Dependability Adaptability Leadership Overall Efficiency Neatness of Work Ability to be Taught Care of Equipment Ability to Troubleshoot Sincerity in Job Manual Skill Overall Proficiency in Rate Average Score

First Sample Second[ Sampl (N=286) (N=l87) Pay Pay Pay Pay Grade Grade Grade Grade In In Out Out •32 .26 .46 .48 .56 • 55 •33 •39 .42, .42 .50 •30 .58 •57 .44 .60 .45 •59 .66 .78 .61 .50 .62 .60 .58 .87 .60 .54 .49 •59 .61 .46 .69 .67 .47 .43 .63 •53 .60 •72 .56 •55 .62 .54 .65 .51 .44 .64 .48 .51 .42 .42 •55 •53 .61 .42 •55 •35 .60 .54 .44 .44 .60 .60 .58 .45 .66 .40 .63 .57

.67

* Tetrachoric Correlation Coefficients

.73

-

-

-

-

-

-

-

-

-

-

.74

•59

coefficients to increase, while a few remained the same and a few decreased in size.

In general the increase in

objectivity occurred with traits which are not so highly related to pay grade. Thus it may have been that instruc­ ts^ tions designed decrease inter-trait correlations and re­ duce halo were somewhat effective with the first sample, and this fact showed up in the inter-rater agreement.

It

was apparent, in scoring the ratings, that pay grade was affecting various raters differently.

Some were overcome

by its influence and rated strictly in accordance with pay grade level on every trait.

Others remained more ob­

jective, their ratings in many traits showing relatively little relationship to pay grade.

If both kinds of raters

picked the same men within each pay grade level as best and poorest, then removal of the effect of pay grade should in­ crease inter-rater agreement.

It is believed that this is

what happened in the first sample. In the second sample, however, the general effect of removing pay grade variance was to reduce the inter­ rater agreement.

It is believed that the reason for this

reversal of the earlier results was due again to the in­ structions (this time the lack of them) given to the raters. Since practically no personal contact was had with the raters during the second administration of the Rating Scale, and since many of the raters in the second sample had not

82 participated in the first ratings, it is reasonable to hypothesize that the logical and halo effects created by the presence of pay grade were greater in the second sam­ ple than in the first.

This hypothesis is substantiated

by the fact that the correlations of pay grade with other Rating Scale variables was higher in the second sample than in the first, and also by the fact that trait inter­ correlations were higher in general for the second sample. It is suggested, then, that pay grade variance did con­ tribute to the magnitude of the inter-rater agreement in the second sample, and its removal caused a reduction in agreement. In concluding the discussion of the objectivity of the Rating Scale, a word is in order about the level of agreement obtained.

In general the agreement is more sub­

stantial than that found in using conventional rating scale formats both on individual traits and on average score. There is no particular justification for including average score in this report, since the relationship of the vari­ ous trait-scores to each other and to the total score has not been studied.

Undoubtedly exclusion of some of the

less objective traits would Increase the agreement on aver­ age score appreciably.

CHAPTER VII

THE RELIABILITY OF THE RATING SCALE AND SOME INDICATIONS OF VALIDITY In this chapter the method is described for determin­ ing the reliability of the rating scale, and the correla­ tions of each trait with the total score on a performance check-list are reported.

I.

RELIABILITY OF THE SCALE

An important feature of any evaluating instrument is the consistency with which it measures.

Few reports on

rating scales of the traditional type indicate satisfactory reliability, and in the majority of studies, reliability in the re-rate sense is not even reported.

Reliability in this

sense is intra-rater agreement from one rating period to the next.

In many studies, inter-rater agreement, or what has

been called in this dissertation, objectivity, is reported as reliability but this is not reliability in the usual sense of the term. One hundred fifteen subjects were common to the first and second samples which had the same rater (s) in both ad­ ministrations of the scale.

These raters were a non-select-

ed group from six submarines, being singled out solely

84 because they had participated in both samples.

No raters

who had so participated were omitted from the study.

The

elapsed period of time between the first and second ratings varied from five months to as much as nine months for the various raters. Since fairly substantial reliability would be expect­ ed due to the influence of pay grade alone, it was decided that it should be calculated on a within-pay-grade basis. That is, for each pay grade level it was determined whether a given rater placed the same men high and low on the basis of total sGore on his second evaluation as he did on his first.

The self-agreements and disagreements were then plot­

ted in a four-fold table as in the earlier correlational procedures, and the tetrachoric coefficient was determined to be .88 for the 115 cases.

This value compares very well

with those usually obtained with more objective devices, such as psychological tests, and is considerably higher than the reliabilities usually attributed to rating scales (.60 to .70).

II.

RELATIONSHIP TO OTHER MEASURES OF PERFORMANCE

The validation of the Rating Scale has not been at­ tempted in any systematic way at this time, since it was developed as one of the initial phases in the overall pro­ gram to develop performance criteria of naval personnel at

sea.

It was hoped that Rating Scale scores could have

been correlated with independent scores on performance tests being developed under this same contract.

Unfor­

tunately the performance tests still are being developed at this writing.

However, correlations of the Rating

Scale traits with a third measure, that of a Performance Check-List, (which contained specific objective items of a performance or job-knowledge nature to be answered “can do" or "cannot do"), were calculated on the first sample of 187 Electrician's Mates and Enginemen. These intercorrelations are shown in Table XV. Generally, the best agreement was found between those Rat­ ing Scale traits which were intended to measure technical skills and knowledge.

Since the Performance Check-List

was made up entirely of items covering performance of tech­ nical tasks, this agreement is both desirable and to be expected.

Two reservations must be made, however, in in­

terpreting the results found in Table XV.

The first is

that there were some raters in common to both the Rating Scale and the Performance Check-List.

This fact would

create a boot-strapping effect and tend to raise the cor­ relations.

The other factor which makes interpretation

more complex is that the variance due to pay grade has been removed from these results.

This would tend to de­

crease the magnitude of the correlations appreciably

86

TABLE XV RELATIONSHIP BETWEEN PERFORMANCE CHECK LIST (FORM RF 107) TOTAL SCORES AND SCORES ON INDIVIDUAL TRAITS OF THE RAT­ ING SCALE (FORM RF 101), GENERAL CLASSIFICATION TEST SCORES, AND SELECTED BIOGRAPHICAL INFORMATION, WITH THE INFLUENCE OF PAY GRADE HELD CONSTANT N = 187 Electrician*s Mates and Enginemen Variable

Correlation (Tetrachoric) with Check-List Scores

1. Social Adjustment 2. Quality of Work 3. Neatness of Appearance 4. Cooperation 5. Watch Standing 6. Knowledge of Job 7* Discipline 8. Application and Initiative 9 . Judgment and Common Sense 10. Dependability 11. Adaptability 12. Leadership 13.. Overall Efficiency 14. Neatness of Work 13•“ Ability to be Taught 16. Care of Equipment 17* Ability to Troubleshoot 18 . Sincerity in Job 19. Manual Skill 20. Overall Efficiency in Rate 21. General Classification Test 22. Age 23• Length on Board 24. Education * Significant at the 5 per cent level ** Significant at the 1 per cent level

-.10 .32** .07 .14 .32** . 49** .29*

.16 .40** .40** .37** •35** .46** .40** .21 •35** .41** .43** .43** .31** .22 .08 .01 -•03

87 because of the substantial relationship of pay grade to both the Rating Scale scores and those on the Check-List, particularly the latter.

Fifteen of the twenty Rating

Scale traits, then, had significant or very significant relationships to Performance Check-List scores based on variance within pay grade.

Although these results are

not sufficient to draw any conclusions about the validity of the scale, it is felt that they are indicative of promise.

CHAPTER VIII

SUMMARY AND CONCLUSIONS This chapter contains a summary of the procedures used in the study and of the results obtained from them. The final section presents the conclusions reached and some suggestions for further research.

I.

SUMMARY

The recent movement to develop more adequate crite­ ria of performance has resulted in refinement of conven­ tional rating scales, development of new techniques such as performance check-lists and forced-choice techniques, and has utilized more frequently the basic psychological scaling methods of ranking, paired comparisons, and equalappearing intervals.

Generally speaking, most of the

newer techniques have been developed in an effort to reduce some or all of the objections usually raised to convention­ al rating devices.

These have been summarized by Richard­

son1 as follows: 1.

Most men are rated too high by conventional

1 Richardson, op. cit.

89 plans.

The tendency to over-rate increases every year

that a graphic or variant of the graphic system is in operation. 2. theresult

Some raters over-rate more than others, with that their ratings are not comparable with those

made by other supervisors or executives. 3.

Overall or summary ratings bring out few real

differences between men. 4.

Actual analyses of the job behavior expected by

a man have seldom been scientifically made and carried out. In fact, in a graphic scale, it does not matter much which or how many traits or aspects of job behavior are listed. A general halo makes it almost impossible to get a clear picture of a man's strong and weak points.

5.

Ratings have not often been sufficiently con­

sistent and reliable to warrant much faith in them.

In

fact, many conventional procedures have not been checked for reliability. 6. time.

Conventional scales require too much training

Acceptable ratings have been obtained only as a res­

ult of continuous expensive training of the raters. An attempt is made to answer these criticisms in view of the results of this study in the concluding section of this chapter. It was the purpose of this study to develop and

90 explore the characteristics of a rating scale which could be used easily by relatively naive raters to give meaning­ ful and useful evaluations.

The rating scale study was

made in the setting of a more inclusive study of criteria of shipboard performance which involved psychological tests, performance tests, and performance check-lists in addition to the Rating Scale. A format was chosen for study which had been recomp mended by Guilford and which had been reported as having particular advantages by Stevens and Wonderlic^ and by ii Gilinski. With this format the rating scale differed from conventional types primarily in the fact that all men were rated in one trait at a time, thus making possible use of the fundamental techniques of ranking and paired-comparisons. A wide selection of traits was made and these were defined, as much as possible, in simple behavioral terms. The continua were qualified by four degrees of descriptive statements, only one of which was indicative of unsatis­ factory performance.

This was done in an effort to increase

2 Guilford, op. c i t . 3 Stevens and Wonderlic, op. c i t . ^ Gilinski, op. cit.

91 the number of ratings toward the lower end of the continuum* since it was felt that raters would more readily place a man in the ’’barely acceptable” category than in a failure category. The Rating Scale was administered to two populations of Enginemen and Electrician's Mates aboard submarines at San Diego.

The first sample was comprised of 187 such

personnel, and the second sample, gathered five to nine months later, totaled 286.

Ratings were made by superior

Petty Officers and by Commissioned Officers in charge.

An

average of two to three ratings was obtained on each man. Raters in the first sample were given short verbal instruc­ tions on the use of this particular rating format, and on common fallacies in the rating procedure.

For the second

administration, most raters received no more instruction than appeared in the rating booklet itself. Ratings were converted into raw scores by measuring the distance of each check mark from the end of the con­ tinuum with a centimeter rule.

Means and standard devia­

tions were computed for each rater on each trait and the raw scores transformed into standard scores in an effort to equate the means and variabilities of the various raters. Mean ratings were found to fall quite acceptably near the center of the continuum for the typical rater. Most raters also gave a sufficient spread of ratings for

92 useful discriminations.

A few gave a high proportion of

tie ratings which resulted in small dispersions and little or no discrimination.

For this reason the second group of

raters were specifically instructed to avoid ties if at all possible. The intercorrelations of rating scale traits, along with certain biographical data and pay grade, were deter­ mined under three conditions for each of the two samples, resulting in six correlation matrices in all.

The first

method was a straight-forward case of tetrachoric correla­ tion with pay grade operating as a variable.

In order to

determine the extent which pay grade was contributing to these intercorrelations, they were re-computed on a withinpay-grade basis.

This method of holding the effects of pay

grade constant was then checked by partialling the effects out statistically.

Thus there were three intercorrela­

tion matrices for each of the two samples. These matrices were then submitted to factor analy­ sis by the Thurstone method.

In the first sample, four

factors emerged from the original matrix, five from the re­ duced (within-pay-grade) matrix, and six from the partialled matrix.

This curious result reccurred in the second sample,

the original matrix of which yielded three factors, the reduced matrix five, and the partialled matrix, seven. Two types of factors we.re rather clearly identified

93 in each analysis of original data.

The first of these

was identified as a Technical Competence factor, indica­ tive of ability to do the job and highly saturated with pay grade.

The second type of factor seemed to reflect

Personal Adjustment, being related to effort, sincerity, cooperation, social adjustment, and so forth.

In the

analyses of the partialled and reduced matrices, factors of these two general types continued to appear, but their identification became less straight-forward.

The tech­

nical Competence factor appeared even with the effects of pay grade removed, indicating that it involved something more than pay grade variance alone. The inter-rater agreement on the various traits and on total score was computed for each sample under each of two conditions--with pay grade operating as a variable, and on a within-pay-grade basis.

In the first sample agree­

ment was increased in general when computed on a within-paygrade basis.

Agreement on the majority of single traits

was .50 or better, and on total score averaged .70.

In the

second sample agreement decreased somewhat when'computed on a within-pay-grade basis.

It is believed that pay grade

created a larger halo for this sample due to the lack of an opportunity to personally instruct raters.

Again the

agreement on the majority of single traits exceeded .50 , and averaged better than .65 on total score.

94

The reliability of the scale was determined on 115 non-selected cases which were in common to both samples. A period of from five to nine months elapsed between suc­ cessive administrations of the scale.

The computation of

reliability was for total score on a with-pay-grade basis and was found to be .88. Since other performance measures (which were to comprise a large part of the overall project to develop criteria of shipboard performance) were not completed at the time this report was begun, it was not possible to carry out the validation procedures anticipated at the beginning of the study.

An indication of validity was ob­

tained, however, when fifteen of the twenty Rating Scale traits were found to correlate significantly with a check­ list of performance items. No basic change in the format of the Rating Scale is suggested as a result of the findings of this study. Several recommendations can be made for its general im­ provement, however. In general there is need for redefinition and ela­ boration of many of the traits.

The statements of degree

should be tested by some technique such as equal-appearing intervals, to determine where they really fall along the continuum in the minds of the raters.

The agreement of

the raters in this respect should be determined.

95 The Rating Scale should be shortened in length considerably.

The three or four traits which contribute

most uniquely to the most clearly identified factors (Personal Adjustment, Technical Competence, and perhaps Carefulness) should be retained and redefined as described. The instructions to the raters should be made more explicit in an attempt to make the raters more conscious of the factors which they are asked to rate.

The traits

might profitably be grouped in the scale according to the factor to which they are most highly related.

Such group­

ings might be emphasized for the benefit of the raters by utilizing a particular color of paper for each group of traits which defined a particular factor.

II.

CONCLUSIONS

A Rating Scale was developed and used by relatively naive and uninstructed raters to evaluate the performance naval personnel aboard ship.

In terms of the criticisms of

conventional scales raised by Richardson,5 and reported in the beginning of this chapter, the scale reported "in this study would seem to be an improvement over conventional types.

Considering the several objections raised by

5 Richardson, op. cit.

96

Richardson, it was found that: 1.

The average man was not rated in the upper end

of the scale.

Mean ratings fell quite acceptably near the

centers of the continual

Few raters skewed their ratings

negatively to any marked degree.

A few even produced dis­

tributions with positive skewness. 2.

Since ratings were reasonably normally distri­

buted in general, differences in raters1 means and vari­ abilities were reduced by using standard scores. 3.

There was no difference in the discriminations

resulting from overall ratings and those resulting from supposedly more unique traits.

There was considerable dis­

crimination within pay grade on total score as w e l l . 4.

Although the analysis of job behavior which led

to the definition of traits in this study left something to be desired, the criticism by Richardson that halo ob­ scures differences from trait to trait in graphic devices was by no means entirely true for this scale.

Granted that

the halo due to pay grade or other factors was substantial in producing high intercorrelations, numerous fluctuations in the trait profiles of many men were noted.

At least

two independent factors were found to be rated, one of a technical and one of an adjustment nature. 5*

The reliability of the scale was as high as

that of most psychological tests, and higher than that

97 reported for the great majority of rating devices, whatever their type. 6.

The results of the first sample represent about

fifteen minutes explanation and training.

Practically no

training was given during the second sample, the resulting ratings being somewhat inferior but still- very useful. Thus it is felt that the objections made by Richard­ son to conventional type rating scales are largely inap­ plicable to the scale developed for use in this study.

Even

better results are being obtained currently on a variation of the form designed for private industry.

The results

suggest the following specific conclusions: 1.

The rating of all men on one trait at a time

has yielded ratings with greater reliability and more inter­ rater agreement than is generally reported. 2.

Relatively naive raters can use the method ef­

fectively and with little training.

Ratings can be made

very rapidly. 3.

Mean ratings and dispersions of ratings can be

controlled somewhat by the phraseology used along the continua.

In general mean ratings are smaller if only the

extreme lower end of the continuum represents unsatisfactory . performance. 4.

Instructions to avoid tie ratings increase dis­

criminations effectively.

98 5«

Discriminations among naval personnel can be

made reliably on a within-pay-grade basis. 6.

Inter-rater agreement may be raised or lowered

by holding constant a general factor such as pay grade. This result depends upon the extent to which individual raters are influenced by the general factor in making the ratings. 7«

* The factor dimensionality seems to increase

when a general factor

such as pay grade is held constant.

At the same time the factors become more difficult to identify. 8.

The factor structures of the Rating Scale was

not identical in the two samples, but there were similar­ ities. 9*

Holding the effects of pay grade constant by

considering variance within pay grade did not produce intercorrelations of traits identical with those obtained by statistically partialling out the effects of pay grade. However, these two procedures did remove about the same proportion of variance. As tions

a result of the study, the following recommenda­

for future work and research can be made: 1.

The Rating Scale should be effectively revised,

retaining those traits that are most objective and which contribute most uniquely to the factors of Technical

99 Competence and Personal Adjustment. 2.

The verbal descriptions of these traits, to­

gether with the statements of degree along the continua, should be evaluated in some systematic fashion to determine their importance in the total performance situation.

State­

ments of degree should be studied by some method such as T h u r s t o n e ^ equal-appearing intervals technique. 3.

The relationship of each sub-trait and of total

score should be determined with reference to some outside criterion such as performance tests now under construction. 4.

The phenomenon of increased factor dimension­

ality, with the removal of a general factor present in most or all variables, should be studied with regard to its genuiness and its importance, both theoretically and in a practical sense.

S E L E C T E D

B I B L I O G R A P H Y

BIBLIOGRAPHY

A.

BOOKS

Guilford, J. P . , Fundamental Statistics in Psychology and Education. New York: McGraw-Hill Book Company, 1942. 333 PP_______ , Psychometric Methods. New York: Book Company, 1936"! 56b pp.

McGraw-Hill

Mahler, W. R . , Twenty Years of Merit Rating (1926-1946). New York: The Psychological Corporation, 194773 PP-

B.

PERIODICAL ARTICLES

Almy, H. C., and H. Sorenson, "A Teacher Rating Scale of Determined Reliability and Validity,1' Educational Ad­ ministration and Supervision, 16, 19301 179-186. Barteau, Charles E., "A New Conception in Personnel Rat­ ing," Personnel, 13, 1936, 20-27Bittner, R. H., "Developing an Industrial Merit Rating Pro­ cedure," Personnel Psychology, 1, 1948, 403-432. Bittner, R. H., and E. A. Rundquist, "The Rank-Comparison Rating Method," Journal of Applied Psychology, 34, 1950* 171-177B^um, M. L., "A Contribution to Manual Aptitude Measure/ ment In Industry," Journal of Applied Psychology, 24, 1940, 381-416. Bolanovich, D. J., "Statistical Analysis of an# Industrial Relations Chart," Journal of Applied Psychology, 30, 1946, 22-31Bradshaw, F. F., "The American Council on Education Rating Scale: Its Reliability, Validity and Use," Archives of Psychology, 119> 1930, 80 pp.

101

Charapney, H., and H. Marshall, "Optimal Refinement of the Rating Scale," Journal of Applied Psychology, 23* 1939* 323-331. Chi, Pan-Lin, "Statistical Analysis of Personality Rat­ ings," Journal of Experimental Education, 5* 1937* 229-

245.

Davis, William S., "Factor Merit Rating System," Personnel, 22 , 1946, 309-319. j

Driver, R. S., "The Validity and Reliability of Ratings," Personnel, 17 , 1941, 185-191. Evans, J. W . , "Emotional Bias in Merit Rating," Personnel Journal, 28 , 1950, 290-291, "The Rater’s Task in Merit Rating," Personnel Jour­ nal, 28 , 1950, 375-375. Ewart, E., S. E. Seashore, and J. Tiffin, "A Factor Analysis of an Industrial Merit Rating Scale," Journal of Ap­ plied Psychology, 25* 1941, 481-486. Ferguson, L. W . , "The Value of Acquaintance Ratings in Crite\ rion Research," Personnel Psychology, 2, 1949* 93-102. ________, "A Brief Description of a Reliable Criterion of Job Performance," Journal of Psychology, 25* 1948, 389399. Flannagan, J. C., "A New Approach to Evaluating Personnel," Personnel, 26, 35-42, 1949 . V , "Critical Requirements: A New Approach to Em■'jSloyee Evaluation," Personnel Psychology, 2, 1949* 419425. Furfey, P. H., "An Improved Rating Scale Technique," Jour­ nal of Educational Psychology, 17* 1926, 45-48. GVllup, G. H., "Traits of Successful Retail Salespeople," Journal of Personnel Research, 4, 1926, 474-482. G^ r i s o n , K. C., and S. C. Howell, "The Relationship Between \ Character Trait Ratings and Certain Mental Abilities," Journal of Applied Psychology, 15* 1931* 378-389-

102

Geprge, Wally E., ’’How We Rate Apprentices,” Factory ManX a-genient and Maintenance, 95* 1937* 55 PP* Gilinski, A. S., "The Influence of the Procedure of Judg­ ing on the Halo Effect,” American Psychologist, 2, 1947* 309-310. Jorgensen, C. E., "A Fallacy in the Use of Median Scale f Values in Employee Check Lists,” Journal of Applied Psychology, 33* 1949*.56-59* Kidder, J. H. T., "Employee Rating Methods of Appraising Ability, Efficiency, and Potentialities,” National In­ dustrial Conference Board Management Record, 4, 1942, 33-^0. King, J. E., ”Multiple-Item Approach to Merit Rating,” American Psychologist, 4, 1949* 278 pp. (Abstract). Hausman, H. J., J. T. Begley, and H. L. Parris, "Selected Measures of Proficiency for B-29 Mechanics: Study No. 1,” Human Resources Research Laboratories Report #7* July, 1949. Knauft, E. B., ”A Classification and Evaluation of Person­ nel Rating Methods,” Journal of Applied Psychology, 31* 1947* 617-625. _______ , "Construction and Use of Weighted Check List Rat­ ing Scales for Two Industrial Situations,” Journal of Applied Psychology, 32, 1948, 63-70. Kornhauser, A. W., "Reliability of Average Ratings,” Jour­ nal of Personnel Research, 5* 1926, 309-317* Lawshe, C. H., N. C. Kephart, and E. J. McCormick, "The Paired-Comparison Technique for Rating Performance of Industrial Employees,” Journal of Applied Psychology, 33* 1949* 69-77* Mahler, W. R . , "Some Common Errors in Employee Rating Prac­ tices,” Personnel Journal, 26, 1947* 68-74. .V

* "An Experimental Study of Two Methods of Rating Employees,” Personnel, 25* 1948, 211-220.

Marble, S. D . , "A Performance Basis for Employee Evalua­ tion,” Personnel, 18, 1942, 217-226.

103 Markey, S. C., ’’Consistency of Descriptive Personality Phrases in the Forced Choice Technique,” American Psy­ chologist, 2, 1947, 310-311. Marsh, S., and F. Perrin, "An Experimental Study of Rat­ ing Scale Technique,” Journal of Abnormal and Social Psychology, 19* 1925* 383-399* Morrow, L., ”A New Approach to Merit Rating,” Modern Man­ agement , 9* 1949* 19-22. Patterson, C. H., ”0n the Problem of the Criterion in Prediction Studies,” Journal of Consulting Psychology, 10, 1946, 277-280. Pockrass, J. H., "Rating Training and Experience in Merit System Selection,” Public Personnel Review, 2, 1941, 211 - 222 . Richardson, M. W . , "Forced-Choice Performance Reports," Personnel, 26, 1949* 205-212. _______ > "An Experimental Study of the Forced-Choice Perform­ ance Report,” American Psychologist, 4, 1949* 273-279* (Abstract). Robertson, A. E., and E. L. Stromberg, "The Agreement Be ­ tween Associates* Ratings and Self-Ratings of Person­ ality,” School and Society, 50, 1939* 126-127. Ryan, T. A., "Merit Rating Criticized,11 Personnel Journal, 24, 1945* 6-15* Sisson, E. D . , "Forced-Choice— The New Army Rating,” Per­ sonnel Psychology, 1, 1948, 365-381 . Smith, I. S., "Developing a Service Rating System,” Educa­ tional and Psychological Measurement, 4, 1944, 327-337* Sfcevens, S. N., and E. F. Wonderlic, "An Effective Revision of the Rating Technique,” Personnel Journal, 13* 1934, 125-134. slockford, L., and H. W. Bissell, "Factors Involved in Estab­ lishing a Merit Rating Scale,” Personnel, 26, 1949* 94-

116 .

104 Tailor, E. K., “What Raters Rate," American Psychologist, Jv 3 , 1948, 289-290. (Abstract). Tiffin, J., and W. Musser, “Weighting Merit Rating Items,“ Journal of Applied Psychology, 26, 1942, 575-583.

b u r n e r , W. D . , “A Multiple Committee Method of Merit Rat­ ting,"

Personnel, 25, 1948, 176-194.

Viteles, M. S., “A Psychologist Looks at Job Evaluation,“ Personnel, 17, 1941# 165-176 . Wadsworth Jr., G. W . , “The Field Review Method of Employee revaluation and Internal Placement,” Personnel Journal, 27, 19^8 , 47-54. W^instock, I., “Merit Rating--A Restatement of Principles,” ^ N Personnel Journal, 27, 1948, 223-226. Weiss, R. A., “Rating Scales,” Psychological Bulletin, 30, 1933, 185-208 . Wherry, R. H., and D. H. Fryer, “Buddy Ratings: Popularity Contest or Leadership Criteria?” Personnel Psychology, 2, 1949, 147-159. W^ebe, G. D . , “A Comparison of Various Rating Scales Used in Judging the Merits of Popular Songs,” Journal of Applied Psychology, 23, 1939, 18-22. WiH^e, W. H., "The Reliability of Summaries of Rating Scale Evaluations of Student Personality Traits,” Journal of Genetic Psychology, 53, 1938, 313-320. Zerga, J. E., “Developing an Industrial Merit Rating Scale,” Journal of Applied Psychology, 27, 1943, 190-195* Z^nmerman, W. S., “A Simple Graphical Method for Orthogonal * Rotation of Axes," Psychometrika, 11, 1946, 51-55*

A P P E N D I X

A

105

PRC Form

REPORT OF PERFORMANCE FOR ENLISTED PERSONNEL Date

No. RF 101 (ONR Research Project 70001) U.S.S.

Division

Information on officer or P.O. doing the rating: Name:

Rank or Rate

Administrative position: Length of time acquainted with this division:

Yrs.

mos.

THE FOLLOWING INSTRUCTIONS SHOULD BE FOLLOWED CAREFULLY: On the following pages you will find listed the names of the men in a division. They are to.be rated by you on several qualities which are important for success in the Navy. The qualities are described at the top of each page and descriptive statements are made along the side of the page. As you read these statements you will notice that some of them describe men in the division. However some men may not be described by any one statement but may seem to fall between two statements. You should rate each man with a check mark opposite the statement (or somewhere between the statements) which you think come closest to describing him. It is important that you consider each man in com­ parison to the others in his division as you rate each quality. Obviously there will be a few outstanding men in each group and a few who are not so good, while most of them will fall somewhere in between. Perhaps it will be a help to rate the poorest and best man in each quality first, and then try to rate the rest of the group compared to these two. The first page is a sample of how a completed rating might look. Study it carefully and then proceed with your own ratings.

106 U. S. NAVAL MEDICAL RESEARCH LABORATORY U. S. Naval Submarine Base New London* Connecticut 30 July 1948

TO WHOM XT MAY CONCERN: 1. This letter will introduce L t . Comdr. Clark L. WILSON, Jr., USNR, inactive, qualified for command of submarines, a psychologist of the Psychological Re­ search Center, Los Angeles, California, who is engaged under contract with Office of Naval Research, Navy De­ partment, in matters having to do with personnel selec­ tion and performance. 2. It has been arranged that Dr. Wilson and his staff will cooperate with this laboratory in attempt­ ing to obtain certain performance information on a group of submarine enlisted personnel upon whom extensive studies were made in this laboratory at the time of their entrance into the submarine force. 3. Officers of the Navy will be of service to the Naval establishment and the submarine force in particular, by cooperating in every possible way with Lieut. Comdr. Wilson in his efforts.

/s/ T. L. Willmon

C 0 P Y

T. L. WILLMON Captain (MC) USN Officer in Charge

[o

LEADERSHIP

o CVJ

o

\ —I

o oo

\

I

Si

Is this man usually a leader in his gang? ' Does he tend to be forceful? Do others carry out his instructions? Has superior ability to lead his gang. He is forceful. He gets things done. Men willingly follow his advice and orders.

w

CO

u 0) X

£ o £

cti

pq

pq

S w -p £ cd £ O pq

From the sample ratings we can see that: 1. 2.

Tends to be a leader. Often dir­ ects work of his gang. His ideas are usually accepted. Sometimes shows some leadership but usually is just a good follow­ er. Has a few suggestions. Not real forceful. Always a follower. Never has any suggestions of his own. Would have a hard time giving orders and getting results.

3.

4.

PETERS is the real leader of the group. BAKER and MILLER are good leaders, MILLER is a little better than BAKER. SMITH and BROWN usually are not leaders but are good followers and might become leaders. GRANT shows no leadership at all.

108

SOCIAL ADJUSTMENT

How well does this man get along with the rest of the crew? Does he often complain? Is he well liked by most of his mates? Has very few friends. Is always griping about something. Does not help morale in the boat. Usually gets along okay. Has his share of friends. Usually fairly cheerful, but has a few gripes.

Gets along with almost everybody. Many friends. Has very few gripes. Usually cheerful. One of the most popular men on board. A friend to every­ body. Almost always cheerful. A help to morale.

QUALITY OF WORK

How does this man compare with others of his rate on doing a good job? Does he make few or many errors compared to others?

Does very fine work. Takes a great deal of care in doing a job. Hardly ever makes a mis­ take . Usually does high grade work. Makes a mistake once in a while but doesn’t repeat it. Usually quite careful. Does work which is usually ac­ ceptable. Sometimes is care­ less and makes mistakes but not too often. Often does unsatisfactory work. Makes many mistakes. Often careless.

Often has a sloppy or dirty uniform. Careless about personal appearance.

Keeps his uniform fairly neat and clean. Appearance is usually okay but gets sloppy once in.a while.

Usually keeps a very neat uniform. Hardly ever presents a sloppy or dirty appearance.

Always keeps uniform extremely neat and clean. Very careful about personal appearance.

Goes out of his way to help others. Accepts orders willingly and cheerfully. Never passes the buck. Usually is a help to others. Never causes any trouble over working with others. Usually accepts orders willingly. Works with others okay. Does not go out of his way for others but is not all for himself either. Accepts orders with­ out griping. Never does anything for any­ body unless ordered. Often passes the buck. Tries to avoid the job. All out for himself.

cu rH rH

WATCH STANDING

How alert is this man while standing his watch? Does he respond to orders quickly? Does he report information quickly and accurately? Often dopes off on the watch. Sometimes misses orders or fails to report information. Usually stands a fair watch. Does not often let down, but sometimes should report more quickly or more accurately. Almost always stands an efficient watch. Responds to orders quickly. Makes accurate reports. Always stands an alert efficient watch. Reports all information and responds to orders at once. Outstanding watch stander.

Has excellent knowledge of the job. Knows all the fine points involved in his rate.

Has very good knowledge of the more important duties of his rate. Does not need much help on the job.

Knows his job fairly well, but still has quite a bit to learn. Sometimes needs help on the job.

Has much to learn about his rate. Often needs help to get the job done.

-=t

rH H

DISCIPLINE

How often does this man get himself into trouble? (Consider condhct both ashore and afloat).

Never gets into trouble. Never needs to be spoken to.

Hardly ever gets into trouble. Never needs more than a warning.

Conduct is usually okay but gets into trouble once in a while. Easily straightened out. Quite often gets into trouble. Often ignores warnings. Conduct is a real problem.

Tries to get out of work. Needs to be pushed in order to get a job done. He is lazy.

Usually does his share of the work, but does not_ do much extra.

Keeps busy without being told. Does more work and works harder than many in his group.

Always busy doing something. Works very hard. Looks for extra work without being told. A Workhorse.

Can rely completely on his ability to make a good decision. Handles emergencies well. Never requires much advice. Usually makes correct decisions and shows good judgment. Does not need advice very often.

Might or might not make a good decision, but usually uses common sense. Needs advice once in a while. Often makes unsound judgments or decisions. Sometimes seems to lack common sense.

Never seems to do quite right. May job at all. Must up on what he has

the job not do the always check done.

Usually follows instructions. Needs to be checked on once in a while. Does not often foul up the job. Willing to accept responsibility. Almost always gets job done right. Needs little check-up or supervision. Does top grade work with hardly any supervision. Outstanding in his ability to get the job done.

Meets changing conditions easily. Outstanding in his ability to do new jobs. Completely at home in subs. Adjusts to new conditions in short time. Learns new jobs with very little instruction.

Learns new jobs fairly well if given enough instruction. A little slow to learn but okay.

Slow to learn new jobs. Has trouble adjusting to changing conditions. Needs much,in­ struction.

Always a follower. Never has any suggestions of his own. Would have a hard time giving orders and getting results. Sometimes shows leadership but usually is just a good follower. Has few suggestions Not real forceful. Tends to be a leader. Often directs work of his gang. His ideas are usually accepted

Has superior ability to lead his gang. He is forceful, get things done. Men willingly follow his advice and orders.

In the upper 25$--one of the very best in submarines.

In the second highest 25$-definitely above average.

In the third 25$--performs okay. Satisfactory but not one of the better submariners.

In the lowest 25$--somewhat below standard in general. Needs to improve.

Work is often sloppy. Leaves gear lying around. No method to his work. Doesn’t clean up.

Work is not real neat but not too sloppy. Work habits are accept­ able .

Generally a neat worker. Methods of work are good. Never lets gear get astray or fouled up.

One of the neatest workers aboard. Takes pride in efficient methods of doing jobs. Keeps gear in strict order.

It is a pleasure to explain things to this man. He is interested in what you say and catches ideas very rapidly. Wants to learn. Gets basic ideas in good shape. Asks questions on technical points which he does not get immediately. Listens to what you say but often takes some time before he under­ stands new technical ideas. Mode­ rately eager to learn. Does not pay much attention to what you are trying to teach him. Knows it all. Not interested in learning.

Treats all equipment and tools with a great deal of respect. Takes great care to see that equipment is properly used and cared for.

Treats equipment and tools reason­ ably well. Tends to be more care­ ful of his own equipment than that of others in the boat. Occasionally is less careful than he should be in the operation and care of equipment and tools.

Very often is careless with equip­ ment and tools. Does not have proper respect for their treatment and care.

Never seems to know where to look for the trouble. Is likely to look anywhere without particular reasons

Has some difficulty locating trouble but usually can work it out if given enough time. Not too logical about where he looks for the trouble. A good troubleshooter. Efficient in the way he locates the diffi­ culty. Usually looks in the most logical places for the trouble. One of the best troubleshooters in the boat. Always seems to go right to the source of the trouble. Very efficient and quick on repair.

Always continues at the job until the very best job has been done under the circumstances. Usually is quite sincere in getting a good job done. Occasionally falls a little short of the best.

Does the job but not always in the best manner. Sometimes fluffs off but generally does a satisfac­ tory job. Often turns out a sloppy job. Gets careless and hurries through a job without regard to quality of work.

Very clumsy In operations requiring use of the hands. Often makes mis­ takes and ruins repair materials.

Does a fair job in manual opera­ tions but sometimes is guilty of sloppy work. Once in a while ruins a part or some material. Does fine manual work. Very seldom ruins any material. Repair work usually is very perminate in nature.

One of the most skilled man on board with his hands. A skilled craftsman in all forms of manual work.

h

OVERALL PROFICIENCY IN RATE

Compare this man with others of the same rate and experience. How well does he stack up on what he knows and how he works?

One of the very best of his rate in submarines. Shows great promise. Definitely in the top 25$.

Falls in the second 25$. Has very good knowledge of his rate compared to others of similar experience. Does fine work. Has fair the work superior In third

knowledge of his rate and he does, but not one of the men at his level of rate. 25$-'

Does not measure up in skill or knowledge of other men of the same rate. Needs improvement. Falls in fourth 25$.

w S s

128

REPORT OF PERFORMANCE FOR ENLISTED PERSONNEL

PRC Form

Date

No. RF 105 (ONR Research Project 70001)

U.S.S.

Division

Information on officer or P.O. doing the rating: Name:

Rank or Rate:

Administrative position: Length of time acquainted with this division:

yrs.

mos.

On the following pages you will find listed the names of the men in a division. They are to be rated by you on several qualities which are important for success in the Navy. The qualities are described at the top of each page and descriptive statements are made along the side of the page. As you read these statements you will notice that some of them describe men in the division. However some seem to fall between two statements. You should rate each man with a check mark opposite the statement (or somewhere between the statements) which you think comes closest to describing him. The first page is a sample of how a completed rating might look. Study it carefully and then proceed with your own ratings. YOUR RATINGS WILL BE MOST VALUABLE IF YOU FOLLOW THESE TWO SUGGESTIONS: 1)

Consider each man in comparison to the others in his division as you rate each trait. There will be a few outstanding men in each group and a few who are not so good, while most of them will fall somewhere in between. It will help to rate the poorest and best man in each quality first, and then try to rate the rest of the gang compared to these two.

2)

Try to avoid giving ties. Before you rate any two men exactly equal, consider carefully whether or not some small difference between them.

ay h

LEADERSHIP

0

0 0 00

CVJ

a Is this man usually a leader in his gang? Does he tend to be forceful? Do others carry out his instructions?



w u