Advanced structural equation modeling : issues and techniques 9780805818192, 0805818197, 9781315827414, 1315827417

938 125 10MB

English Pages [375] Year 1996

Report DMCA / Copyright

DOWNLOAD FILE

Polecaj historie

Advanced structural equation modeling : issues and techniques
 9780805818192, 0805818197, 9781315827414, 1315827417

Citation preview

Edited by

George A. Marcoulides Randall E. Schumacker

Advanced Structural Equation Modeling Issues and Techniques

This page intentionally left blank

Advanced Structural Equation Modeling Issues and Techniques

Edited by

G eorge A. M arcoulides California State University, Fullerton

R andall E. Schum acker University of North Texas

v p Psychology Press Taylor & I rancis Croup New York London

First Published by Lawrence Erlbaum Associates, Inc., Publishers 10 Industrial A venue M ahwah, New Jersey 07430-2262 Transferred to Digital Printing 2009 by Psychology Press 270 Madison Ave, New York NY 10016 27 Church Road, Hove, East Sussex, BN3 2FA Copyright © 1995 l>y Lawrence Erlbaum Associates, Inc. All rights reserved. No part of this book may be repro d u ced in any form , by photostat, microfilm, retrieval system, o r any o th e r m eans, w ithout the p rio r written perm ission o f the publisher.

Cover design by Gail Silverman

Library o f Congress Cataloging-in-Publication Data Advanced structural equation m odeling : issues an d techniques / ed ited by G eorge A. M arcoulides, Randall E. Schum acker. p. cm. Includes bibliographical references an d index. ISBN 0-8058-1819-7 1. M ultivariate analysis. 2. Social sciences— Statistical m ethods. 1. Marconi ides, G eorge A. II. Schum acker. Randall E. Q A 278A27 1996 519.5'35— dc20 95^34310 CIP P ublisher’s Note The publisher has gone to great lengths to ensure the quality o f this reprint but points out that some imperfections in the original may he apparent.

Contents

Acknowledgments 1

Introduction George 4. Bollen

227

Full Inform ation Estim ation in the Presence of Incom plete Data James L Arbuckle

243

10

Inference Problem s W ith Equivalent Models Larry J. Williams, Hamparsum Bo-Aogan, and Lynda Aiman-Smilh

11

An Evaluation of Increm ental Fit Indices: A Clarification o f M athem atical and Empirical Properties Herbert W. Marsh, John R Balia, and Kit-Tai H au

279

315

A uthor Index

355

Subjcct Index

361

About the A uthors

363

Acknowledgments

We would like to thank all o f the contributors for their time and effort in p reparing chapters for this volume. T hey all provided excellent chapters and th e authors worked diligently with us throug h the various stages of the publication proccss. W ithout them , this volume would not exist. We would also like to thank Larry Erlbaum , Ju d ith Amsel, K athleen Dolan, Kathryn Scornavacca, and the rest o f the editorial staff at Lawrence Erlbaum Asso­ ciates for th e ir assistance and su p p o rt in putting to g eth er this volume. Fi­ nally, we thank ou r families for their love an d for en d u rin g yet a n o th e r project. — Geurge A. Marcoulides — Randall E. Schumacker

This page intentionally left blank

CHAPTER

ONE

Introduction G eo rg e A. M arcoulides California Slate University, Fullerton R an d all E. S ch u m a ck e r University of North Texas

Structural equation models (SF.Ms) are used hv biologists, educational re­ searchers, market researchers, medical researchers, psychologists, social sci­ entists, an d others who traditionally deal with n o n ex p erim en tal and quasi-experiinental data. O ne reason for the pervasive use in almost every scientific field of study is that SEMs provide researchers with a com prehen­ sive m ethod for the quantification and testing of theories. In fact., it has been suggested that the development of SEM is perhaps the most im portant and influential statistical revolution to have recently occurred in the scien­ tific arena (Cliff, 1983). SEM techniques are accepted today as a major com ponent o f applied multivariate analysis. The use of the term structural equation modeling is broadly defined to accomodate models that include latent variables, meas­ urem ent errors in both dependent and independent variables, multiple indicators, reciprocal causation, simultaneity, and interdependence. As im­ plem ented in most commercial com puter packages (e.g., Amos, EQS, LISREL, LISCOMP, Mx, SAS PROC-CAUS, ST ATI ST ICA-SEPAT H ), the m ethod includes as special cases such procedures as confirmatory factor analysis, multiple regression, path analysis, models for tim e-dependent data, recursive and nonrccursive models for cross-sectional and longitutinal data, and covariance structure analysis. The purpose of this volume is to introduce the latest issues and devel­ opm ents in SEM techniques. The goal is to provide an understanding and working knowledge of advanced SEM techniques with a minim um of 1

2

MARCOULIDES AND SC11UMACKEK

mathematical derivations. The strategy is to focus primarily on the appli­ cation of SEM techniques in example cases and situations. We believe that by using this didactic approach, readers will better understand the under­ lying logic of advanced SEM techniques and, thereby, assess the suitability of the m ethod for their own research. As such, the volume is targeted to a broad audience across multiple disciplines. It is assumed that the reader has mastered the equivalent of a graduate-level multivariate statistics course that included coverage of introductory SEM techniques. The chapters in this volume provide a timely contribution to the literature on SEM. The theoretical developments and applications of SF.Ms are burgeoning, and so this volume could have easily becomc several volumes. Unfortunately, within the limitations of a single volume, only a limited num ber of topics could be addressed. Each chapter included in this volume was selected to address what we believe are currently the most im portant issues and developments in SEM. T he topics selected include models for m ultitrait-m ultim ethod (MTMM) m atrix analysis, nonlinear structural equation models, multilevel models, cross-domain analyses o f change over time, structural time series models, bootstrapping techniques in the analysis of m ean and covariance structure, limited inform ation estimators, dealing with incomplete data, problems with equivalent models, and an evaluation of increm ental fit indiccs. OVERVIEW OF THE VOLUME Some 35 years ago, Campbell and Fiske (1959) presented the MTMM ma­ trix as a device to assess convergent and divergent validity. Although the qualitative criteria that Campbell and Fiske proposed to ju d g e convergent and divergent validity are quite well known, what is the best m ethod to evaluate quantitatively these two validities is not yet known. In chapter 2, Wothke concentrates on three quantitative approaches to MTMM analysis: confirmatory' factor analysis, covariance com ponent analysis, and the direct product model. The three approaches are presented in formal notation and illustrated with worked examples. In addition, model specifications of the worked examples for the Amos (Arbuckle, 1995), LISRF.l. (Joreskog & Sorbom, 1993), and Mx (Neale, 1994) program s are provided in an appendix. It is shown that covariance com ponent analysis and the direct product model can yield sim ultaneous estimates of method and trait com­ ponents. However, there are many cases where these two approaches pro­ vide inadmissible solutions or very poor fit. In contrast, the confirmatory factor-analytic approach generally provides admissible solutions but does not yield reliable results when both correlated trait and m ethod factors are included in a m easurem ent design. T he chapter concludes by emphasizing the im portance of carefully conceptualized MTMM m easurem ent designs.

1. INTRODUCTION

3

Models with interaction and nonlinear effects are often encountered in the social and behavioral sciences. In SEM, estimating the interaction ef­ fects of latent variables is m ade possible by using nonlinear and product indicators. In chapter S, Joreskog and Yang investigate the interaction effects model proposed by Kenny and Judd (1984) as a general model with nonlinear relationships between latent variables. Using both articifkial and empirical data from example cases that involve interactions am ong latent variables, the authors dem onstrate how nonlinear models can be estimated. Model specifications and output using the I.ISREI, program are provided throughout the chapter. The chapter concludes by providing some general guidelines researchers must consider before attem pting to model nonlinear relationships am ong latent variables. In chapter 4, McArdle and Hamagami present an overview o f the multilevel modeling approach from a multiple group SEM perspective. Multilevel models (also referred to as random coefficient, variance com po­ nents, or hierarchical linear models; Bryk & Raudcnbush, 1992) have recently become popular because they perm it researchers to deal with estimation problems in complex nested data collection designs. The authors examine the major models of multilevel analysis (i.e., the variance com po­ nents model, the random coefficients model, the logit model, longitudinal growth models, and multilevel factor models) and attem pt to relate these models to standard SEMs for multiple groups. Using examples of multilevel models from their own work on individual diil'erences in hum an cognitive abilities, the authors illustrate some of the benefits and limitations of the multilevel approach. In chapter 5, Willett and Sayer dem onstrate how the m ethods of indi­ vidual growth modeling and covariance structure analysis can he integrated to investigate inlerindividual differences in change over time. T he individ­ ual growth-modeling approach uses a pair of hierarchical statistical models to represent two distinct models: (a) individual status as a function of time (the so-called “I.evel 1" m odel), and (b) interindividual differences in true change (the so-called “Level 2” model). In the covariance structure ap­ proach, these two models can be reform atted as the “m easurem ent” and “structural” com ponents of th e general LISREL model with m ean struc­ tures, and their param eter estimates can be determ ined. Using longitudinal data drawn from the National Child Development Study (a longitudinal study of all children born in Britain between March 3-9, 1958), Willett and Sayer explain and dem onstrate how the new approach can be used to investigate the in te rre latio n sh ip s am ong sim ultaneous individual changes in two dom ains (reading and mathematics achievement) over the entire school career, for both healthy children and children with chronic illness. Generalizations of the approach to m ore than two domains are also discussed, and sample LISREL programs (Joreskog & Sorbom. 1993) are provided in an appendix.

4

MARCOULIDES AND SCHL'MACKF.R

Tim e series models attem pt to describe the behavior o f a process across m ultiple occasions (Harvey, 1981). To date, tim e scries models have not received extensive usage in th e social and behavioral sciences. T he most cited reasons for this neglect have been the need to collect data across num erous occasions, and the m athem atical complexity o f the models (Schmitz, 1990). In ch ap ter 6, H ershberger, M olenaar, and C orneal describe how various tim e series m odels can be analyzed using an SEM approach. For each time series m odel presented, the authors provide a basic description o f the model, how the m odel is analyzed in SEM, and the results o f fitting the m odel to an exam ple data set. In addition, I.ISREI. instructions an d m odel specifications o f th e exam ple data sets arc provided in an appendix. In ch ap ter 7, Yung and B entler focus on the application of bootstrapping techniques to covariance structure analysis, o f which exploratory an d con­ firm atory factor analysis arc considered the leading cases. In general, bootstrapping techniques involve the draw ing o f random subsamples from an original sam ple an d estim ating various param eters from each subsam ple for the purpose o f studying som e characteristic of th e distribution o f these param eter estimates (F.fron & T ihshirani, 1993). U sing real data, the authors exam ine the application o f bootstrap m ethods suitable for covariance structure analysisand determ ine w hether the bootstrap "works” for the given situations. T he authors also present two new applications o f the bootstrap approach to mean and covariance structures. Estim ators for SEMs fall in to two classes: full-inform ation an d limited-infonnation estimators. Eull-inform ation estim ators attem p t to derive param e­ ter estimates in a m odel by using all inform ation in the sets o f equations sim ultaneously (e.g., m axim um likelihood, generalized least squares, weighted least squares). L im itcd-inform ation estim ators atternpt to derive param eter estimates in a m odel, one equation at a time. U nfortunately, a critical draw back with full-inform ation estim ators is that any specification erro r in the set o f equations can bias the estim ates th ro u g h o u t the system. In contrast., the use o f lim ited-inform ation estim ators results in the bias som etim es being isolated to either one equation o r ju st part of the system. In chapter 8 , Bollen presents a two-stage least squares (2SLS) lim ited-infor­ m ation estim ator for structural equation m odels with latent variables. Using exam ple data, Bollcn explains and dem onstrates how the proposed 2SLS estim ator can be used in various SEMs, including those with heteroscedastic errors. In chapter 9, Arbuckle discusses the application of m axim um likelihood (ML) estim ation as an alternative m ethod for h an dling missing data. In­ com plete or missing data are routinely en co u n te re d in structural equation modeling, an d are even a byproduct from the use o f ccrtain designs (e.g., Balanced Incom plete Block, Spiraling Designs; Brown, 1994). T o date, a considerable num ber of approaches for dealing with missing data have

1. INTRODUCTION

5

been proposed in the literature (Little 8c Rubin, 1989). T he most com m on approaches arc ad hoc adjustm ents to the d ata th at attem p t to rem edy the problem either by removal or replacem ent of values. SEMs arc built on the assum ption that the covariance m atrix follows a W ishart distribution (Brown, 1994; Joreskog, 1969). As such, com plete data are required for the probability density function estim ation, and ad hoc adjustm ents must be m ade to the data set when th e re arc missing values. C u rren t practice in the handling o f missing data for SEMs appears to be dom inated by the m ethods som etim es known as “pairwise d eletio n ” an d “listwise deletion.” In Arbuckle’s chapter, two sim ulations are presented in which the per­ form ance of a full-inform ation ML estim ation m ethod is com pared to the perform ance o f com peting missing daw techniques. T h e results d em o n ­ strate both the efficiency and the reduced bias o f ML estimates. It appears that ML estim ation with incom plete data should be the p referred m ethod o f treating missing data when the alternatives arc pairwise deletion and listwise deletion. T he problem o f equivalent m odels has been known since th e earliest developm ental stages o f SEM. Equivalent models are those that provide identical statistical fit to th e data as th e hypothesized m odel but may imply very different substantive interpretations o f the data. In ch ap ter 10, Wil­ liams, Bozdogan, an d Aiman-Smith provide an overview of problem s asso­ ciated with testing equivalent m odels and propose the use o f a novel inform ation com plexity approach to assess m odel fit for equivalent models. Using results of analyses from both sim ulated and real sam ple data, the authors dem onstrate how to assess m odel lit am ong the equivalent models and how substantive conclusions about relationships in m odels can be im pacted when selecting one m odel from am ong a set of equivalent models. In order to assess the goodness o f fit of SEMs, researchers generally rely on subjective indices of fit (Bentler, 1990). Although different types of indices have been proposed in the literature, n o concensus has been reached as to which are die host (Tanaka, 1993). Currently, the most popular indices are from the family o f increm ental fit indices (Bollen & Long, 1993). T he final chapter by Marsh. Balia, an d H au, exam ines seven increm ental fit indices in relation to their m athem atical an d empirical properties in a large M onte Carlo study. T hese seven increm ental fit indices are provided as standard ou tp u t in most commercially available SEM com ­ pu ter packages. T he authors recom m end that the rionnorm ed fit index (o r its norm ed counterpart the norrned Tucker-Lew is index), an d the relative noncentrality index (or its norm ed co u n terp art the com parative fit index) should be preferred w hen evaluating goodness o f fit o f SEMs. T he norm ed fit index, the increm ental fit index, and the relative fit index are not recom m ended because they are systematically biased by sample size and can inappropriately penalize m odel complexity.

6

MARCOULIDES AND SCHUMACKER

REFERENCES A: buckle, J . L. (1995). Amor for Windows. Analysis of moment structures (Version 3.5). Chicago, IL: Smallwaters. Bentler, P. M. (1990). Com parative fit indices in structural models. Psychological Bulletin, 107, 238-246. Bollen, K. A., 8c Long, ). S. (1993). Introduction. In K. A. Bollen & J. S. Long (Eds.). Testing structural equation models. Newbury Park, CA: Sage. Brown, R. I.. (1994). Efficacy o f the indirect ap proach for estimating structural equation models with missing data: A com parison o f five m ethods. Structural liquation Modeling, I, 287-316. Bryk, A. S., & Raudenbush, S. W. (1992), Hierarchical linear models: Applications and data analysis methods. Newbury Park, CA: Sage. Cam pbell, D. T-, & Fiske, f). W. (1959). C onvergent an d discrim inant validation by the m ultitiait-m ultim elhod m atrix. Psychological Bulletin, *>6t 81-105. Cliff, N. (1983). Some cavitions concerning the application of causa! m odeling m ethods. Multivariate Behavior(d Research, 18, 115-126. lifron, 15., 8c T ibshirani, R. J. (1993). A n introduction to the bootstrap. New York: C hapm an Sc Hall. Harvey, A. C. (1981). Time-series models. O xford: Philip Alan. Joreskog, K. G. (1969). A genera) approach to confirm atory maximum likelihood factor analysis. Psychometrik/i, 34, 18^-202. Joreskog, K. G., 8c Sorbom, D. (1993). IJSRtll. S. Structural cf/uution modeling with the S/MPIJS command l/inguage. Chicago, IL: Scientific Software International, Kenny, D. A., & Judd, C. M. (1984). Estimating th e nonlinear and interactive effects o f latent variables. Psychological Bulletin, 96, 201-210. Little, R. J. A., & Rubin, D. B. (1989). T he analysis o f social scicncc data with missing values. Sociological Methods and Research, IS, 292-326. Neale, M. C. (1994). Mx: Statistical modeling (2nd ed.). Richm ond, VA: A uthor. Schm itr, B. (1990). Univariate and multivariate time-series models: T h e analysis o f intraindividual variability and inicrindividual relationships. In A. Von Eye (fcd.)> Statistical methodv in longitudinal research (Vol. 2). New Yotk: Academic Press. Tanaka, J. S. (1993). M ultifaceted conceptions o f fit in structural equation models. In IC. A. Bollen & J. S. Long (Eds.), Testing structural etju/Uion models. Newbury Park. CA: Sage.

C HAP TE R

TWO

Models for Multitrait-Multimethod Matrix Analysis W e r n e r W o th k e SmxiltWaters Corporation, Chicago

T h e well-known contribution by Cam pbell an d Fiske (1959) p resen ted the m ultitrait-m ukim ethod (MTMM) m atrix form at as a device to study trait validity across different assessment m ethods. T he crossed m easu rem en t de­ sign in the MTMM m atrix derives from a sim ple rationale: Traits are uni­ versal, manifest over a variety' of situations an d detectable with a variety of m ethods. Most im portantly, the m agnitude of a trait should n o t change ju s t bccause different assessment m ethods arc used. T his chapter discusses several m ultivariate statistical m odels for analyzing the MTMM matrix. T h ro u g h o u t the following, the term s trait and measure will frequently appear. T h e trait concept is always used in psychometric term s, as a "latent” variable (I.ord & Novick, 1968, p. 530). Its use does n o t imply the prcscnce o f an endu rin g personality disposition. As used here, the term trail could be applied equally to m agnitude perceptions or attitudes. The o th e r term , measure, designates an observed variable like a test score, rating, an d the like. A measure is directly observed an d is usually subject to m easurem ent error. This is in contrast to traits, which arc free o f erro r but no t dirccdy observable. Ideally, if a MTMM covariance m atrix contains m equivalent sets of m easures o f I traits, each utilizing a different m eth o d o f assessment, it will have the structure

7

8

WOTHKE

+ diag(0(l V

( 1)

^

where X is the ml x ml covariance m atrix am ong all m easures, 1 is an m x 1 vector o f u n it values, ® symbolizes the (right) K ronecker p roduct o p era to r (c.f. Bock, 1975), is the t x t covariance matrix am ong trait measures within each m ediod, and diag(6 (i,i), 0(i,2), - . - , 0 («n,i)) >s the diagonal matrix o f uncorrelated uniqueness com ponents of the ml measures. A trivia) im plication o f m odel E quation 1 is that all m easurem ents must be on a com parable scale. T his is the same as th e psychom etric concept of T-equivalent m easurem ent ( [orcskog, 1971; Lord & Novick, 1968). In the behavioral and social sciences, w here such diverse m ethods as test scores, behavioral observations, and one-item ratings arc often being com pared, strong scale assumptions are not usually warranted. T hese tields have typically regarded scale inform ation as arbitrary or of little interest and, since Spear­ m an's days, have had a tradition o f analyzing correlaiion m atrices, thereby neglccting any inform ation due to the original scale of m easurem ent. Following the scale-free tradition, Campbell and Fiske (1959) avoided the strict term inology of E quation 1 and proposed, instead, several qualitative criteria to ju d g e convergent an d discrim inant validity (m ore recently re­ viewed by Schmitt & Stults, 1986). These criteria are based on direct com parisons am ong correlation coefficients, ap p ear to be rigorous, and d o not require com plicated statistical com putation. Yet, while the Cam pbell and Fiskc criteria will often detect m ethod effects, they can be equally violated in the absence of m ethod effects when m easures merely differ in reliability (W othke, 1984). T h e Campbell an d Fiske criteria can therefore be unneces­ sarily restrictive com pared to th e m ore recent psychom etric concepts of T-equivalent and congeneric m easurem ent (Joreskog. 1971). Also, because of their lack of a statistical basis, th e Campbell an d Fiskc criteria should be replaced by quantitative rules th a t are sensitive to sam pling e rro r (see, e.g.. Althauser, 1974; A lthauserScH eberlein, 1970; Althauser, H eberlein, & Scott, 1971). This chapter concentrates o n th ree quantitative approaches that easily le n d them selves to MTMM analysis: confirmatory fac.Un- analysis (CFA;

2. MODF.15 FOR MTMM ANALYSIS

9

Joreskog, 1966,1971,1974, 1978), covariancecomponent analysis (CCA; Bock, 1960; Bock & Bargm ann, 1966; Wiley, Schm idt, 8c Bram ble, 1973), and the direct product model (DPM; Browne, 1984a; Cudeck, 1988; W othke 8c Browne, 1990). CFA and CCA are im m ediate realizations o f the multivariate linear model. T h e DPM, while having a distinctly n o n lin ear appearance, can, nevertheless, be expressed as a constrained version o f second-order factor analysis (W othke & Browne, 1990). All th ree approaches are there­ fore instances of the linear m odel and its host of statistical theory, but they arc structurally distinct and derive from different statistical traditions: CFA is rooted in the psychometric tradition of validity theory, as outlined by Lord and Novick (1968); CCA is a m ultivariate generalization of random effects analysis o f variance, based on R. A. F isher’s work; an d the DPM is a new anim al altogether. T he rem ainder of the ch ap ter is organized as follows: T he next section describes two com m only used factor m odels for the MTMM matrix; one having only trail factors, the o th e r including additional m ethod factors. It illustrates their application with worked exam ples an d discusses frequently occurring identification problem s o f th e second model. 1 th e n present a structural m odeling framework Tor covariance com ponent analysis, state identification conditions for its fixcd-scale an d scale-frcc variants, and il­ lustrate the application of the scale-free version with two worked examples. A distinction is m ade between planned com parisons o f traits and m ethods and arbitrarily chosen contrasts. T o aid interpretation in th e latter case, it is shown how to transform the estim ates to sim ple structure using blockwise principal com ponents decom position an d varimax rotation m ethods. T he next section gives the specifications an d identification conditions for the DPM. A worked exam ple of the dircct product m odel is in terp reted from both multiplicative an d linear viewpoints. T h e final section develops a com m on framework for all three models, discusses how they are nested within each other, and relates them to the Campbell and Fiskc validity criteria. Model specifications o f the worked exam ples, for the Amos (Arbuckle, 1995), LISREL (Joreskog 8c Sorbom , 1993) an d Mx (Neale, 1994) program s, are given in th e appendix.

CONFIRMATORY FACTOR ANALYSIS A m ong the several com peting m ultivariate m odels for MTMM matrix analysis reviewed by Schm itt and Stults (1986) and by Millsap (1995), virtually only the CFA approach has m aintained an appreciable following in the literature. In this section, two p ro m in e n t types o f factor m odels are evaluated: the trait-only m odel, outlined by Jorcskog (1966, 1971), which expresses the observed variables in term s of t nonoverlapping, oblique trait

10

VVOTHKF.

factois, an d die trait method factor mode! (cf. A lthauser & H ebcrlcin, 1970; Joreskog, 1971, 1978; W erts & Linn, 1970), which contains m additional m ethod factors to absorb m ethod com ponents o f m easurem ent. Instructions fo r specifying the two MTMM factor m odel structures have long been available (e.g.,Joreskog, 1971,1978). These apply equally to th e several avail­ able statistical estim ation m ethods, such as not mal-theory m axim um likeli­ hood (ML), asymptotically distribution-free (Browne, 1984b), and others that are nowadays available in com m ercially distributed statistical software. Exem plary analyses in this and the following sections are worked out with the Ain os (ArbuckJe, 1995) and LISREL (Joreskog & Sorbom , 1993) program s, with program in p u t displayed in the appendix. All exam ples use the com m on norm al-theory ML estim ation m ethod, an d fit is evaluated by the G' statistic,1 com puted as C2 = (w - 1) [In l±i - lnlSI + t r a c e d ' ) - ml],

(2)

where n is the sam ple size. U n d er m ultivariate normality, given the popu­ lation covariance structure £, G2 is asymptotically / ’-distributed with mt(mi + 1) dJ -

9

“ ?•

(3)

w here q is the n u m b er of independently estim ated m odel param eters. Factor analysis decom poses th e n x p data m atrix X of p m easures on n units into an n x k m atrix E o f a lesser num b er k o f latent factors: X = EA' + E,

(4)

w here A is the p x h m atrix of partial regression coefficients of observed m easures regressed onto the latent factors an d E is the m atrix o f unique com ponents assum ed to be uncorrelated with each o th er an d with the factor scores in E. Expressing the population covariance m atrices of X, E and E as , an d 0 , respectively, Equation 4 implies the covariance representation A

1.01) -.02 .27 .40 -.03 17

AM.Th

1.00 .01 .01 .76 -.0)

A L.Th_[ ADC.SD

1.00 .34 -.03 .44

1.00 00 .33

AM.SD

AL.SD

1.00 .on

1.00

2. MODELS FOR MTMM ANALYSIS

n

T A B I£ 2.2 Trait-Only Factor Analysis o f the Flamer A Data

Factor loading matrix Ar

Method Likcrl

Thurstone

SD

TVait ADC AM AL ADC AM AL ADC AM AL

Trait ADC .85 .00 .00 .84 .00 .00 .50 .00 .00

factors AM AL .00 .00 .77 .00 .00 .61 .00 .00 .80 .00 .62 .00 .00 .00 .94 .00 .71 .00

Uniqueness Estimates 0 .28 .41 .63 .29 .36 .62 .75 .12 .50

Factor correlat ions .

(20)

D ue to the removal o f red u n d an t param eters, * has two fewer rows and colum ns than . Yet, Equations 15 and 20 are equivalent— O* merely omits the term s that could n o t be estim ated in the first place.

WOTHKE

20

Topology o f Covariance Structures. M odel interp retatio n should distin­ guish entries in served variation attributable to these latent differences is estim ated by 4>*. T rait validation o r general m eth o d effects can l>e established to the degree that diagonal entries o f have large positive values. Conversely, w hen the estim ated variance o f a trait contrast is zero, the involved traits cannot be discrim inated on em pirical grounds. In anal­ ogy, m easurem ent m ethods can be said to be equivalent to each o th e r w hen the associated m ethod contrasts have zero variance. Interpretation o f covariance co m p o n en t estimates can be simplified by choosing K to be t/rthcnionnal, K'K •- I. T h en , because the colum ns of K are orthogonal, correlations derived from can be in terp reted directly and.

21

2. MODELS FOR MTMM ANALYSIS

consequently, in d ependence between trait and m eth o d com ponents can be easily tested. Second, because the colum ns of an o rthonorm al K are also norm ed to unit, length, all p aram eters in

(symm.) 4>’_ Tt &

(22) ‘

7 G eneral, trait, and m ethod variates sufficiently describe the covariance structure o f the m easured variables b u t are correlated with each other. Indepcndent-common-varialion: T he first row an d colum n show zero en ­ tries in the off-diagonal elem ents: (symm.) 0 0

^rr &M?

(23) ‘ 7

T rait variation an d m ethod variation arc in d e p en d e n t of the general variale, while trait contrast may still correlate with m ethod contrasts. Block-diagonal O*: Trait contrasts are uncorrelated with m ethod contrasts and, in addition, the general variatc is in d e p en d e n t o f b o th (rait and m ethod contrasts. \ (symm.) f a * o

=

0

*k ' +

(29)

d;

’0 d ; '

All three ratings of Broad Interests show com parable scale factors. Asser­ tiveness, Cheerfulness, and Seriousness have sim ilar scale factors for staff and team m ate rating m ethods, w hereas self-rating factors arc substantially smaller. Self-ratings of these personality characteristics are hardly equiva­ lent to ratings m ade by others. Interpretation o f in Table 2.8 is similar to describing a “norm al” covariance matrix. T he diagonal entries arc relative variance com ponents estim ated u n d er constraint (26). T he off-diagonal elem ents are covariances between individual-difference profiles and can be standardized in to cor­ relation coefficients. Because is block-diagonal, the trait profiles are independent of m ethod differences (cf. section on typology o f covariance structures), and, conse­ quently, any further exploratory treatm ent can be carried out independently w ithin each block. T he toolbox o f m ultivariate statistics offers several

28

WOTI IKK

exploratory methods suitable for transforming block-diagonal CCA solutions; among these, principal axes decomposition of the covariance com ponent blocks and varimax rotation arc well known and yield easily interpretable results. Blockwise Principal Axes Decomposition. This section proposes the prin­ cipal axes decomposition to transform the submatric.es and into diagonal form and, by producing orthogonal variates in the process, to simplify the interpretation of model estimates. The principal axes method was successfully introduced to the factor analytic field by Hotelling (1933). Eigenvalues and the corresponding (nonzero) eigenvectors q, of a symmetric matrix A arc defined as the roots of -H - qA

(30)

For a p x p matrix, there will be up to p distinct eigenvalues. Solutions of Equation 30 can be obtained by various numerical methods, many of which are implemented in commercially m aintained software libraries, for exam­ ple, IMSL (IMSL, Inc., 1980), LAPACK (E. Anderson et al., 1992) and the NAG library (Numerical Algorithms G roup, Ltd., 1987), or in mathematical programs such as SAS/IML (SAS Institute, 1985) and Mathcad (Mathsoft, 1992). For symmetric matrices, eigenvectors associated with different eigenval­ ues are orthogonal. All eigenvectors of A may be scaled to unit-length and assembled in the columns of the matrix Q, so that Q 'Q = I. By collecting the associated eigenvalues in the same order in the diagonal matrix Dx, Equation 30 can be written more compacdy as (31)

(32) It has becom e customary in principal com ponent analysis to rescale the norm cd eigenvectors Q of a correlation or covariance matrix into so-called principal axes coefficients P = QD[/2 (H arm an, 1967; Hotelling, 1933). Clearly, PIP' ~ QD^Q' = A (i.e., the P, are rescaled so that their underlying com ponents have unit variance, while the com ponents underlying the q* have variance A*). In covariance com ponents models, com putation of principal axis com­ ponents is applied to the product K«t>*K' and yields invariant results for full-rank choices of trait and method contrasts in the (tm) x (1 + (I - 1) + (m - 1)) matrix K. W hen the estimate of ®:,: is block-diagonal, the covariance com ponents can be partitioned into additive general, trait, and method com ponents as:

29

2. MODELS FOR MTMM ANALYSIS

(syinm.) 0

K O K = (KIKJK,,)

0

K 0

\ - V

^

> O'w* /

+ K ^ K '. + K ^ K V

Kt El K',

(33)

(34)

w here the subscripts g, X an d p refer to the general, trait, an d m ethod parts o f the m odel, respectively. Separate eigenstructures have to be com puted for the trait co m p o n en t K 't and the m ethod co m p o n en t Blockwise Varimax Rotation. Analytic rotation o f the principal axes co­ efficient blocks P, and may be em ployed to rotate the estimates to sim ple structure. A ro tated solution com prises the sam e num eric inform a­ tion as an u nrotated principal axes representation, an d thereby, as the original covariance co m ponent solution (Equation 33). Let T be an or­ thogonal rotation m atrix, with T T ' - I, and L, - P :T be a ro tated matrix. Any such L, is equivalent to P, as a decom position of K)

Worked Example Lawler’s (1967) MTMM matrix of managerial jo b perform ance is used as an example. The sample correlation matrix is given in Table 2.10. Three traits are postulated: quality of jo b perform ance, ability to perform on the job, and effort put forth on the job; each trait is assessed with superior, peer, and ^ r a t i n g methods. Convergent validity am ong ratings by superiors and peers is shown by large entries in die first validity diagonal and by similar correlation patterns in the heterotrait triangles. Self-ratings, on the other hand, do not converge with ratings m ade by others—entries in the related validity diagonals are small and sometim es exceeded by correlations in the same heiei omethod block.:l It appears that discrim inant validity of the measures is not clearly established: Sizable entries in all m onom ethod blocks suggest the presence of m ethod variance. *l..ack o f convergence between self-ratings and ratings by others has also bee11 observed in other studies; a well-known example is the Kelly and Fiske assessment data discussed previously.

35

2. MODELS FOR MTMM ANALYSIS TABLE 2.10 Lawler's (1967) Managerial jo b Perform ance Data (A '= U S )

QualStipo AbilSupe EfFtSupe QiialPccr AbilPeer EtftPeer Qua! Self AbilSelf EfftSulf

Su.Q

S u .A

Su .E

1.00 .53 .56 .65 .42 .40 .01 .03 .06

1.00 .44 .38 .52 .31 .01 .13 .01

1.00 .40 .30 .53 .09 .03 .30

P .Q

1.00 .56

,r>r> .01 .04 .02

P. A

P .E

Se.Q

Se.A

S e.E

1.00 .40 .17 .09 .01

1.00 .10 .02 .30

1.00 .43 .40

1.00 .14

1.00

Note. Reprinted with permission.

M aximum likelihood estim ates for the direcl-pioducl m odel (see Equa­ tion 40) with com posite errors are displayed in T able 2.11. Identification conditions arc satisfied using the four equality constraints (C(1) u, = (E^)u = 1 and r r , 1.00 .51 .00 .00 .00 .00 .00 .00

1.00 .00 .00 .00 .00 .00 .00

(symmetric) 1.00 .69 .66 .00 .00 .00

1.00 .51 .00 .00 .00

1.00 .00 .00 .00

2. MODF.LS FOR MTMM ANALYSIS

37

All off-diagonal entries in ihese co m ponent m atrices have t values larger than 2. T he elem ents o f f l|t and n t are the multiplicative com ponents o f cor­ relation coefficients corrected for attenuation an d are consequently larger than the correlation coefficients in T able 2.10. T h e two m atrices clearly reflect trends in the sam ple correlations. Off-diagonal entries iri I],, indicate changes in MTMM m atrix patterns relative to the m on o m eth o d blocks. For instance, the value o f .72 in n,, signifies a 28% d rop in the size o f correlation coefficients when o n e trait is m easured by supervisor ratings and the oilier by peer ratings. T he num bers .21 and .20 in f l(, similarly indicate that correlations are reduced by 79% or 80% when supervisor o r peer ratings are com pared to self-ratings. Off-diagonal entries in II, express m agnitude changes relative to validity diagonal entries. T he value o f .69 describes the average disattenuated cor­ relation between quality and ability ratings as 31% smaller than the (disat­ tenuated) diagonal entries. Correspondingly, the correlation between qual­ ity and ejforrt is 34% smaller, and the correlation between ability and ejforl is 49% smaller. ju s t as in the case with the C Q \ m odel, the fit of the DPM assures neither convergent nor discriminant validity. Convergent an d discrim inant validation require an asymmetric evaluation o f the two structural matrices: II,, should app ro ach a matrix of u n it entries while the off-diagonal elem ents in n , should approxim ate zero. This is n o t the case with the I-awler data: The average off-diagonal entry is .38 in an d .62 in ft,; these two figures point in the exact opposite direction from where they should be, suggesting poor discrim inant validity. Linear Interftretatiim. T h e linear param eterization (see E quation 40) o f the DPM provides a m atrix of disattenuated correlation com ponents am ong traits (II,) and a m atrix (C ,j o f m ethod-specific weight factors. While interpretation of II, rem ains th e sam e as in th e previous section, entries in C,, arc order-dependent. Analytic rotation m ethods such as Kaiser’s (1958) varimax rotation criterion may again be em ployed to obtain an order-independent solution, and the rotated m eth o d co m p o n en t loadings can be interpreted as regression coefficients o f observed variables o n to orthogonal latent variates similar to traditional exploratory factor analysis. Varimax rotated m ethod loadings for the Lawler (1967) jo b perform ­ ance data are given in T able 2.12. T h e factor structure shows three com ­ ponents, the first o f which affects superior ratings, the second peer ratings, an d the third g r a t i n g s . All three m ethods lead to fairly specific m easure­ m ents, most o f all the self-ratings m ethod. .Superior and peer rating m easures do not diaw inform ation from the third factor, b u t share a m oderate am ount o f com m on variance with each other, indicated by th e two factor loadings o f 0.38.

38

WOTHKE TABLE 2.12 Varimax Rotated M ethod C om ponents for Lawler Data

Superior P eer Self

.92 .38 .08

K ■38 .92 .07

.10 .09 .94

T he structure of the rotated loading matrix LMcorresponds closely to the substance of the Campbell and Fiskc criteria. If trait m easurem ent is virtually independent of the m ethods used, only one column of L„ (or Cp) should show large entries, and the estimates in all other columns should vanish. Such a solu tio n im plies a co n g en eric m easu rem en t m odel (Joreskog, 1966, 1971), as discussed elsewhere. Conversely, large entries in any additional column of indicate independent sources o f m ethod variation that would likely lead to considerable violations of the Campbell and Fiske criteria. In the case of the Lawler data, all three m ethods show a substantial degree of independent variation; .^ r a tin g s arc found to have the smallest overlap with the other methods. Conclusions Regarding the DPM T he DPM (Browne, 1984a; Cudeck, 1988) presents a relatively novel, mul­ tiplicative model for M'l'MM correlation matrices. The modeling principle o f the direct product approach is that crossing two different m ethods a and b will attenuate die trait correlation pattern II, by a constant factor [0 S Tt,MC)(1,4l < 1), relative to die m onotnethod correlations. Corre­ lation patterns in the different heterom ethod blocks must therefore be proportional to each other. Results may differ from covariance com ponent and factor-analytic approaches that represent m ethod effects by adding constant terms to the covariance patterns in the heterom ethod blocks. U nder the DPM, scale factors and error term s of the observed variables may be fixed, free, or restricted to suitable patterns. Browne’s (1984a) form ulation suggests that individual scale factors are estimated for all trait-m ethod units. This ensures that model estimates are scale-free, applying equally to covariance or correlation matrices. The variances of m easurem ent errors may either be generally heteroscedastic, leaving the uniqueness coef­ ficients essentially unconstrained, or follow a composite error model, reflecting trait and method components in the product E = E,, Ej (Browne, 1984a). Wodike and Browne’s (1990) reparam eterization relates the DPM to second-order factor analysis. T he resulting linear (re)form ulation of the DPM perm its interpretation of the same m odel param eters from linear as well as multiplicative m odeling viewpoints. Although the multiplicative framework focuses on the relative attenuation o f correlation structures

2. MODF.I.S FOR MTMM ANALYSIS

39

across trails or m ethods, a linear viewpoint may likewise be entertained, interpreting the estimates in terms of factorial com ponents that reflect, independent sources of variation. Both linear and multiplicative approaches to MTMM analysis are not only accom m odatcd und er the DPM, but there is also a one-to-one translation between the respective sets o f estimates. The difference between these two modeling approaches is therefore merely a matter of conceptual style.

DISCUSSION Because different structural assumptions for the MTMM matrix are incor­ porated in the factor-analytic, as well as CCA and direct product models, the question often arises which of these is best. Some comparative remarks are therefore in order. W hen selecting a formal model for empirical data, o r a larger theory for that matter, consideration should to be given to aspects o f model parsimony, statistical fit, statistical power, and conceptual mcaningfulncss of model param eters. The next section discusses model parsimony and the known conceptual relationships am ong the different models. Exact statistical power analysis of multivariate models is a somewhat m ore com­ plicated issue, and is not addressed in this chapter. As a general rule, however, m ore parsim onious models also tend to be more powerful. Con­ versely, models usually lose statistical power when many additional parame­ ters are included. Since loss of power implies higher costs in terms of increased sample size requirem ents a n d /o r higher Type II erro r rales, it is advantageous to keep the num ber of conceptually uninteresting (or nuisance) param eters small. Fit statistics and param eter estimates of the models can be interpreted informally, as dem onstrated by worked examples in the preceding sections, or a m ore formal interpretation in terms of validity criteria may be at­ tempted. The later part of this section outlines the ways in which the models generalize the Campbell and Fiske criteria. Relation Between Factor Analysis, CCA, and DPMs T he param eter counts for standard versions of the MTMM models, and for two special models with additional constraints, are shown in Tabic 2.13. Among the standard models, trait-only factor analysis has th e least num ber of parameters, whereas the block-diagonal trait-m ethod factor model has the most parameters, with mt+ m ( m - l ) /2 additional terms for the m ethod factor structure. Between these two extrem es are the block-diagonal CCA and the DPM (with unconstrained error terms) approaches, both of which

40

WOTIIKE TABLE 2.13 Parameter Counts o f Scale-Free MTMM M odels

a.

b.

Mode) Standard models Trait-only factor analysis Block-diagonal CCA DPM with unconstrained errors Block-diagonal trait-method FA Models with additional constraints Diagonal OCA D fM with composite errors

Number of parameters 2m t 2m* + ( ( i 2m t -f t(t 3m t + l[t -

+ t(t 1)/2 + l)/2 + l) / 2 +

l) /2 m ( m - l) /2 m {m - l) /2 m (m - lj/2

2m t + t + m - 2 m t + t(t + l ) / 2 + m(ro + l ) / 2 - 1

have m( m - l ) / 2 m ore param eters than th e trail-only factor m odel and mt few er param eters than block-diagonal Irait-m ethod factor analysis. The nesting relationships am ong these four standard m odels will be discussed. T h e two m odels with additional constraints establish interesting variants o f scale-free MTMM models, but are singled out in T able 2.13 because they clearly d o not form a hierarchical relation with th e factor-analytic ap­ proaches. Diagonal CCA (illustrated earlier) contains the sam e 2 ml param e­ ters for scale factors an d e rro r variances as block-diagonal CCA., b u t ( m 1) ( m - 2 ) /2 + (t - 1) (I —2 ) /2 covariance com ponents are fixed at zero so th a t only th e m + t - 2 variance com ponents o f th e trait and m eth o d profiles have to be estim ated. T h e DPM with com posite erro rs (dem onstrated earlier) uses only t+ m - 1 param eters to describe the uniq u e variance com ­ ponents instead of the ml param eters used by the unconstrained error model. F u rth e r constraints are possible. Especially w hen strict assum ptions abo u t the scale of m easurem ent a n d /o r the size of erro r variance can be en tertained, then som e o r all o f the mt scale param eters can be fixed. A worthwhile advantage o f vising fixed-scale m odels would be m odel parsi­ m ony as well as their considerably hig h er statistical power com pared to scale-free models. However, scale constraints can only be applied to covari­ ance matrices; they ca n n o t be used to m odel correlation m atrices that have traditionally been studied with MTMM designs. Relation Between Factor Analysis and CCA. Trait-only factor analysis is a m ore restricted form o f CCA. Specifically, th e trait-only factor m odel is equivalent to a (scale-free) block-diagonal CCA m odel with th e m( m l ) / 2 param eter m ethod covariance block JP fixed at zero. T h e equivalence can be dem onstrated by a one-to-one m apping of die m odel param eters. N ote that, with *„ in th e CCA m odel.

41

2. MODF.IS FOR MTMM ANALYSIS

C onsider the trait-only factor contributions defined by Equations 6 and 7. Since the factor loadings in AT are nonoverlapping, the m atrix can be expressed as AT= D y( l „ ® I T).

(47)

D efine 1(1 ® C. as a rank t contrast m atrix for the t traits in the m odel, including o n e general variate and t - 1 trait profile com parisons. Ct has full rank. H ence, the relation between factor covariances t and covariance com ponents ; can be written as V- - C D ;c v

(48)

T hen, by repeated application o f the elem entary matrix identity AB ® CD = (A ® C )(B ® D), the covariance m atrix o f the factor m odel is 2 = A-OrA, + (-)

(49)

= D ,(l11® I I)Or( l 'p® I t)D ,+ 0

(50)

= D ,(W ® < I> ;)l> ,+ e

(51)

= D , ( W ® C,;C't)D, + 0

(52)

= Dx(lj, ® C tW (lV ® C'i)D,, + 0

(53)

= R ,R 4 S K \D ,+ 0 ,

(54)

yielding a covariance co m ponents form ulation. Identification rules for scale-free CCA (discussed previously) require that the first row an d colum n of be fixed at suitable values (e.g., ij/i.i = 1 and = V i.i= 0 f ° r 2 *■ !)• T hus, param eterization (see E quation 54) is the block-diagonal scale-free covariancc com ponents m odel, om itting all term s for m ethod covariance structures. T he trait-m cthod factor m odel is m ore general than the scalc-frcc blockdiagonal CCA. This can be shown by equating the CCA m odel with the following restricted version o f trait-m ethod factor analysis: As discussed earlier, the trait-m ethod factor is generally underidentified w hen the load­ ing matrix has the proportional structure (see E quation i l )

(55)

w o n IKK

42

using ihe ml x (m + t) design m atrix A,”,, . D efining T - D.ODj yields the covariance structure ^ = d a ; w

; d +0

(56) (57)

T he apparent problem is th a t X'-v has only colum n rank m. + / - 1 so that th e param eters in 'I' cannot all be uniquely estim ated. T o resolve the iden­ tification problem , select an m ix (m + I - 1) m atrix K.,. o f colum n contrasts of A;„ , so that A.’^ = Kl(1Lw. Define V* = L,(1'PL .|1 an d apply the identifying constraints of the section on m odel identification for CCA to *P*, yielding the constrained covariance co m ponent m atrix *. H ence, the m odel covariance matrix represented by the trait-m ethod factor m odel with pro p o r­ tional loadings (sec Equation 11) is: (58) again, a CCA form ulation. Identification conditions for scale-free CCA es­ tim ation require that the first row an d colum n o f O* have fixed values, such as (j),', = 1 and zeros in th e o th e r fields. F urtherm ore, because the original factor covariance matrix was block-diagonal, the submatrix -308. Graybill, F. A. (19(31). An introduction to linear statistical models (Vol. 1). New York: McGraw-Hill. Grayson. F)., 8c Marsh, H. W. (1994). Identification with deficient rank loading matrices in confirm atory analysis: M ultitniit-m ultim ethod models. Psychometrika, 59{ 1), 121-134. H arm an, H. H. (1967). Modem factor analysis (2nd ed .). Chicago: University o f Chicago Press. H otelling, H. (1933). Analysis o f a com plex o f statistical variables into principal com ponents. Journal of Educational Psychology, 24, 417-441, 498-520. 1MSL, Inc. (1980). The IMSL library (Edition 8 ). H ouston, TX: A uthor. Joreskog, K. G. (1906). Testing a simple structure hypothesis in factor analysis. Psychometrika, 31, 165-178. Joreskog, K. G. (1971). Statistical analysis o f sets o f congeneric rests. Psychomdrika, 36(2), 109-133. Joreskog, K. G. (1974). Analyzing psychological data by structural analysis o f covariance matrices. In D. H. Krantz, R. C. Atkinson, R. D. Luce, 8c P. Suppes (Eds.), (Contemporary developments in mathematical psychology (Vol. 2). San Francisco: Freem an. Joreskog, K. G. (1978). Structural analysis of covariance and correlation matrices. Psychomelriku, 43(4), 443-477. Joi'eskog, K. C»., & Sorborn, D. (1979). Advances in factor analysis and structural ecfuation models. Cam bridge, MA: Abt Books. Joreskog, K. G., 8c Sorbom, D. (1989). LISRFI. 7. A guide to the program and applications (2nd erl.). Chicago, II.: SPSS. Joreskog, K. G.. & Sorbom, D. (1993). I.1SRF.L 8. Structural equation modeling with the. Simplis command language. Chicago, IL: Scientific Software International. Kaiser, H. F. (1958). T h e varimax criterion for analytic rotation in factor analysis. Psychometrika. 23, 187-200.

2. MODELS FOR MTMM ANALYSIS

55

Kalleberg, A. L., & Kluegcl, J. R. (1975). Analysis o f th e m ultitrait-m ultim ethod matrix; Some lim itations a n d an alternative. Journal of Applied Psychology, 60(1), 1-9. Kelly, E. 1.., & Fiske, D. W. (1951). The jrredir.lion of performance m clinical psychology. Ann Arbor: University of Michigan Press. Kenny, D. A .r & Kashy, D. A. (1992). Analysis o f the m ulcitrait-m ultim ethod m atrix by con­ firmatory factor analysis. Psychological Bulletin, 1 1 2 (\), 165-172. Lawler, E. E., III. (1967). T he m ultitrait-m ultim ethod approach to m easuring managerial jo b perform ance. Journal of Applied Psychology, 5 /(5 ), 369-381. Lord, F. M., 8c Novick, M. R. (1968). Statistical theories of mental test scores. Reading, MA: AHdison-Wesley. Marsh. H. W. (1989). Confirmatory factor analyses o f m ultitrait-m ultim ethod data: Many problem s and a few solutions. Applied Psychological Measurement. 13(4), 335-361. Marsh, H. W., & Bailey, M. (1991). Confirm atory factor analyses o f m ultitrait-m ultim ethod data: A com parison o f alternative models. Applied Psychological Measurement, !5( 1), 47-70. M arsh, FI. W., Balia, J. R„ 8c McDonald, R. P. (1988). Goodness*of-fit indexes in confirm atory factor analysis: T h e effect o f sample size. Psychological Bulletin, 103(3), 391-410. Mathsoft, Inc. (1992). Mathcad 3.1. User's guide. Cam bridge, MA: A uthor. McDonald, R. P.. & Marsh. H. W. (1990). Choosing a multivariate model; N oncentrality and goodness o f fit. Psychological Bulletin, I07{2), 247-255. Millsap, R. F.. (1992). Sufficient conditions for rotational uniqueness in th e additive MTMM model. Brilish foum al of Mathematical and Statistical Psychology, 45, 125-138. Millsap. R. E. (1995). T h e statistical analysis o f m ethod effects in multicrait-multi m ethod data: A review. In P. E. S hro u t & S. T. Fiskc (Eds.), Personality research, methods, and theory: A festschrift honoring Donald W. Fiske. Hillsdale, NJ: Lawrence Erlbaum Associates. Neale, M. C. (1994). Mx: Statistical modeling (2nd ed.). R ichm ond, VA: A uthor. Numerical A lgorithms C roup, Ltd. (1987). NAG FORTRAN libraries Mark 12. O xford, GB: Author. Rindskopf, D. M. (1984). Using phantom and imaginary latent variables to param eterize constraints in linear structural m odels. Psychomelrik/t, 49, 109-119. SAS Institute Inc. (1985). SAS/IML, user's guide fo r personal computers (Version 6 ). Cary, NC: Author. .Schmitt, N. (1978). Path analysis o f m ulutrait-m ultim ethod matrices. Applied Psychological Measurement, 2(2), 157-173. Schmitt, N„ 8c Stults, D. M. (1986). M ediodology review: Analysis o f m ultitrait-m ultim ethod matrices. Applied Psychological Measurement, 13. Spearm an, C. (1904). G eneral intelligence, objectively determ in ed an d m easured. American Journal o f Psychology, 1.5, 201-293. Stacy, A. W., W idaman, K. F., Hays, R., & DiMatteo, M. R. (1985). Validity o f self-reports o f alcohol and o th e r drug use: A m ultitrait-m uU im ethod assessment. Journal of Personality and Social Psychology, 49(1), 219-232. Swain, A. j. (1975). Analysis of parametric structures f o f and two observable indicators *, and x, of such that \

o'

_11_ K:, 0 4. 8, 0 ** 'h ^2 *4 0 \ *4 8, \ / V \ / Kenny and Ju d d did not include the constant in tercep t term s a and T, in E quations 1 and 2. T he usual argum ent for leaving these o u t— that one can work with the observed variables in deviation scores from their m eans— is not valid here. T h e point is that even if an d C, in (1) all have zero m eans, a will still be nonzero. As is seen later, the m eans o f the observed variables are functions o f other param eters in th e m odel an d therefore the intercept term s have to be estim ated jointly with all the o th e r param eters. T o begin with, we make the following assum ptions 1.

and

are bivariate norm al with zero m eans

2. c ~ m v )

3. 6, ~ .V(O,0,), *= 1,. ..,4 4. 8, is in d e p en d e n t o f 5, for

i

^ j

5. 8, is in d e p en d e n t o f ^ for

i

= 1 ,.. „ 4 an d j =1,2

6. ^ is in d ep en d en t o f 821 A.A^iO,|

022 1 0 *4022

°« l

°63

°64

°61

(9)

Ov,

where Or,i -

i + i=sY A i + t i 7 i ^ i + V / ; f « + T*(«)»

( 10)

^1^1 + X30|,

( 11 )

^61 —^3*24*11 + ^1*2^21 >

( 12 )

°f>2 = X3,l

(13) (H)

3. THE KF.NNY-jllDD MODF.I.

o.«, =

61

+ n^ + x'fe, + x'^e, + ]l,(l»21, L Y < 3 ,1 ) L Y ( 5 , 2 )

FR P S ( 1 , 1 ) - H S (2 ,2 ) F R LY ( 1 , 4 ) L Y ( 2 , 4 ) VA

1

t.Y ( 2,

3)

LY

L Y < 3 .4 ) L Y < 4 ,4 ) L Y ( 5 , 4 )

( 4 ,2 }

L Y ( 6 , 3 ) C A {4 )

CO L Y (6 r 1 ) —L Y ( 4 , 4 ) CO L Y ( 6 , 2 ) = L Y ( 2 , 4 ) CO l . Y ( 6 , 4 ) = L Y ( 2 , 4 ) * L Y ( 4 , 4) CO P S (3 , 3 )= P S ( 1 / 1 ) * P S ( 2 , 2 )+ P S ( 2 , 1 ) * * 2

70

JOkJiSKOG AND YANG

CO T E ( 6 , 2 ) - L Y ( 4 , - 1 ) * T E ( 2 , 2 ) CO T E ( 6 , 4 ) = L Y ( 2 , 4 ) * T G ( 4 , 4 ) CO T E ( 6 , G ) = L Y C 2 , 4 ) ' * 2 * T E ( 4 . 4 ) +L) f ( 4 , 4 ) * * 2 * TE ( 2 , 2 ) +C PS(1,1)*TE(4,4>+PS(2,2)*TB(2,2)+TE(2,2)*TE(4,4) CO GA( 3 ) = l ’S ( 2 , 1 ) ST

. 5 AL L

S T 0 LY ( 2 , 4 ) L Y ( 3 / 4 )

LY('l.'l)

LY ( 5 , 4 >

ST 1 LY( 1 , 4 ) OU S O NS At t e OKF D F = - )

Since *P is singular, one m ust put AD=OFF o n the OU line. T h ere is no rcfercncc variable for the fou rth factor, so starting values m ust be provided and NS must be specified on the OU line. It is also a good idea to p u t SO on the OU line to tell LISREL not to check the scales for the factors. Bccause of the fixed variable 1, LISREL automatically reduces the degrees of freedom by one. T he DF= -1 on th e O U line reduces the degrees o f freedom fu rth er by one as required since th e rank of W, is 8 . T H E KENNY-JUDD MODEL W ITH FO U R PR O D U C T VARIABLES Ketinv an d Ju d d (1984) suggested using four product variables instead o f only one. As stated previously, this is not ncccssary because th e m odel is identified with ju s t one p ro d u ct variable. T h e addition of th ree m ore p ro d ­ uct variables ju s t adds many m ore m anifest param eters w ithout ad d in g any new param eters to estim ate. So it would seem that o ne gains degrees of freedom and that the m odel therefore is m ore parsim onious. However, th ere is a price paid for this. First, the augm ented m om ent matrix A is now of order 1 0 x 10 an d the corresponding estim ated asymptotic covariance matrix W, o f a has a rank deficiency o f 10 rather than 2. Second, the more p roduct variables used, th e larger A will be an d th e larger the sam ple size required to estim ate W„ with sufficient precision in term s of statistical sam­ pling error. Recall that W„ contains several m om ents o f eighth o rd er of the original variables. Il is also m ore difficult to com pute W~ accurately for large A matrices. T hird, using highly singular weight matrices in LISREL increases the possibilities of m ultiple local m inim a o f the fit function. Also WLS is m ore problem atic with fo u r product variables for it turns out that, with four product variables, W in Equation 19 is singular with a rank deficiency o f 1. It is a conjecture that this holds generally when there are four pro d u ct variables. We have found this to be the case in all m atrices W we have exam ined. If W is singular, o n e m ust replace W 1 in Equation 17 by the M oore-Penrose generalized inverse W . With fo u r product variables, the LISREL im plem entation of ML and WLS uses the following specification:

3.

71

T H F . K F N N Y - j n n n M O D F .I.

(

\

a

' y '

'

X,

Xt

*s

% +

= *1*1

*1*4 *J*1 *!*4 \

h h

V4 X2X, X2X4

\

/

Yi

72 0 0

7s > 0 0 1 0 k, 0 % X, 1 X4 X|X4 X.4 Xj Xg T4?U. ^2^4 1 X2 0 0

*i

\

5,

8, ^3

M s*

84

+

S5

s ,s .

8, 87 8«

V /

/

\

V 0 0 0 0* = 0 0 0 0 0

0, 0 0 0

0? 0 0

6s 0

0

tA

(34)

»4 0

%

1*04

0

0

*.03 0 X,04 e * 06 x30 2 X„05 0 ^7,s 0 e 7

0

x,e.

X40,

0

0

0K(, 0»7

w here ®s» = V « 0 i + M te ® i.

«7S= 1,TA + U , dJ-

^ =v A + W * > e ,= x^3+ x*o, + ,1e4+

+e ie- 2*KS(J+.4*KSI2+.7*KSI l *KSt 2+. 447214*NRAND NE Xl=KSIl4-. 71414 28*NRAND NE X2=. 6*KSI1+.8* NRAND NE X3=KSI2-K6*NRAND NE X4=. 7 *KSI2+.7141428*NRAND NE X1X3=X1*X3 NE XlX4=Xl*X4 NE X2X3=X2*X3 NE X2X4=X2*X4 SD KSI1 KSI2 00 IX=123 MA=CM ME=KJ2. ME CM=KJ2. CM ACC=KJ2.ACC XM

generates data corresponding to the following param eter values: Yj = 0.2 72 = 0.4 7, = 0.7 n = 0.49

= 0.2352

= 0.6 X, = 0.7 = 0.64 y = 0.20

e ; = o . 5 i e * = o .6 4 e s = o .3 6 e 4 = 0 .5 1

a = 1.00 Tj = 0.00, i= 1,23,4. NRAND is a normal random deviate. IX is a seed for the random num ber

generator. By putting R A = K J2 .R A W on the OU line, one can save the generated raw data on 800 cases. However, this is not necessary as one can compute and save the covariance matrix S directly, as done with this command file. The mean vector z and the asymptotic covariance matrix W can be obtained at the same time. To obtain the augm ented m om ent matrix A and the corresponding asymptotic covariance matrix W„ instead, run the same command file again with the same seed and the following OU line: OU IX=123 MA=AM AM=KJ2AM AC=KJ2. ACA XM

T he file K J2 . CM contains the covariance matrix S o f order 9 x 9 and die file K J2 . ACC contains the corresponding asymptotic covariance matrix W of order 45 x 45. The file K J2 .A M contains the augm ented m om ent matrix A of order 10 x 10 and the file K J 2 . ACA contains the corresponding asymptotic, covariance matrix of order 55 x 55. These files all include four product variables. To analyze the model with only one product vari­ able, include the lines:

Y XI X2 X3 X4 X1X3 /

3. THE KENNY-JUDD MODEL

75

in the input file for ML and WLS, and the lines

y XI X2 X3 X4 X1X3 CONST /

in the input file for WLSA. To obtain the generalized inverse W~ , use the following command line: GINV KJ2.ACA KJ2.WMA

W~ will be stored in the binary file KJ2 .WMA. The 55 eigenvalues of Wrt

are given in the file OUTPUT produced by GINV as follow's: Eigenvalues

o t ACA M a t r i x : 1 2 3 4 5 6 0. 3198780+02 0.217726D+02 0 . 1 2 3 4 37D+02 0.667464D+01 0. 514834EH-01 0. 421733CH-01 7 8 9 10 11 12 0. 389412D+01 0 . 305706D+01 G.250682D+01 0.243333D+01 0 . 236390D+01 0 . 190726D+01 13 14 15 16 17 18 0 . 177064D+01 0. 168861 CH-01 0.158013D+01 0.135929D+01 0.134610D+01 0. 11 88 43 1* 01 19 20 21 22 23 24 0 . 1 0 7 4 1 4 D-t-01 0 . 84 7790I>M)C 0. 7 7 6 9 39D+00 0. 769930D+00 0. 713555D+00 0. 615735D+00 25 26 27 28 29 30 0. 54 6278C+00 0 . 4 7 4 596D+00 0. 453407D+00 0. 406390D+00 0. 377083D-*00 0. 327319D+00 31 32 33 34 35 36 0. 286704EH-00 0.279941D+00 0.240174D+00 0. 221 9 35D+C10 0.197S86D+00 0 . 16256SD+00 37 38 39 40 41 42 0 . 1 3 46 470+00 0 . 1 1 3488LH-00 0 . 9 9 1 13 2 0- 01 0.7G9B31D 01 0 . 6 1 6 8B1D-01 0. 548982D- 01 43 44 45 46 47 48 0 . 406724D- 01 0. 349 5 5 5 D- 01 0 . 2 7 7 7 3 0 D 0 1 0. 2244 45 D- 15 0 .1 9 2 6 7 5 D - 1 5 0. 996 38 2D- 16 49 50 51 52 53 54 0 . 9 1 5 0 6 7D-16 0 . 5 2 0 4 1 7D- 17 0 . OOOUOOD+OO-O. 5 1 4 9 0 6 0 - 1 6 - 0 . 1 0 0 8 4 3 D - 2 5 - 0 . 105927D-15 55 - 0. 30 79 8 1D- 15 45 o u t o f 55 e i g e n v e c t o r s r e t a i n e d

Only 45 of the 55 eigenvalues are positive; the remaining are zero within machine capacity. This agrees with the statements made earlier in the discussion on rank deficiency. The results using one and four product variables and the three different methods are given in Tabic 3.1. This shows that the methodology described in the previous sections work. Although only some of the parameters of the model are included in the table, all parameter estimates are very close to the true values. The model fits the data very well as judged by all the chi-squares presented. The standard errors are smaller for Model 2 than for Model 1, but they are very similar across methods. The table might suggest that chi-squares are underestimated particularly for methods ML and WLS. No such conclusion can be made on the basis of a single sample,

76

JORESKOG AND YANG T A B U . 3.1 Artificial Dala {/V = 800) Model 1: O n e Product Variable

Parameter a Vi y-i 7s V

Tm ? Value

M l.

WLS

W ISA

1.0 .2 .4 .7 .2 .6 .7

1.02 (.04) .22 (.06) ■36(.05) ,6 6 (.]2 ) .20 (.04) .59 (.09) .72 (.06) 1.68 9 1.00

1.02 (.07)

.9 9 (0 4 ) ,17(.06) .44{.05) 72(.15) .17(.09) .57(.10) .74 (.06) 2.34 8 .97

-

r

-

/'

.23 (.07) .36 (.05) .65 (.12) .21 (.04) .6 0 (1 0 ) .72 (.06) 1.76 9 .99

Model 2: Four Product Variables PummtUr a ?l 72 Is V *■2 r rf/ p

True Value

ML

WLS

W ISA

1.0 .2 .4 .7 .2 .6 .7

1.02 (.03) .22 (.06) .517 (.04) .64(.07) .21 (.03) ,57(.05) .71 (.04) 15.61 36 1.00

1.03(.06) 24(.06) .36(.04) .65(.08) .19(.05) ,59(.05) .75 (.05) 12.27 35 1.00

1.05(.03) 19(.05) .43(.04) .63(.09) .1 8 (0 8 ) ,52(.07) .78(.05) 27.38 27 .44

-

-

however. N or can anything be said about the accuracy o f standard erro r estimates. Yang (in prep.) is investigating these questions in a forthcom ing dissertation.

BAGOZZrS BEA DATA An exam ple o f a theory involving interactions am ong theoredcal constructs is the theory o f reasoned action in social psychology (Ajzen & Fishbein, 1980; Fishbein & Ajzen, 1975). See Bagozzi, Baum gartner, an d Yi (1992) and references therein for its applications in consum er research. In essence, this theory states that beliefs an d evaluations interact in d eterm in in g attitudes. Data for this m odel was kindly provided by Bagozzi. It consists of: ♦ two m easures o f beliefs that coupon use would lead to savings on the grocery bill: 7-point likely—unlikely scales

77

3. THE KENNY-JUDD MODEL

* two measures of evaluations of conscquenccs of using coupons for shopping; 7-point good-bad scales « three measures of overall attitude toward coupon use: 7-point favor­ able-unfavorable scales The data comes from a survey of 253 female staff mem bers at a large public university in U nited States. T he questionnaire item s are described in Bagozzi et al. (1992). Wc use the term Bagozzi’s BEA data as a working nam e for this data (BRA -- Beliefs, Evaluations, Attitudes). Writing t| = at = attitude, £ , - be - belief = vl — evaluation, the expectancy value model is a t - a + y , be + y9v t + y^bevl + ij,

or in LISREL notation 1 = a + Y & +Y s£* + Yi& + C

(37)

T here are three indicators A A \, ,4.42, AA3 o f at, two indicators BEl and BE2 of be, and two indicators VL1 and VL2 of vL In LISREL notation these correspond to y , respectively. T he model will be estimated with one single product variable Bid I'Ll = and with four product variables BE\VL\ = x ,B E lV L 2 - x^x^BElVLl - xtx1l,BE2VL2 - x.jx4. A path diagram is shown in Fig. 3.1. The model differs from the Kenny-Judd model only in that there are three y-variables rath er than one. For the model with four product variables, the LISREL specification for ML and WLS is:

V y-2

\

'■df

=

(\\

(e \

X5»>

n+ e2 ]

' 1 x2

0 0

0 0

0 0

1

0

*4

0 1

*4 *4 + x,% = V * X,x4 V 4

*1 % t,X4 t,X.2 **

Vs ^*4

VV * /

*4

'1 . 1 S. 1,1,

K

\

x = r + A£+6 The parameter matrices

k

and O are as in Equation 28 and 0 f = diagf&pfiWfi™),

79

■i. THE KENNY-JUDD MODEL

0, 0 0 0

0v 0 0

x30.

0

03 0

04 *|03 0 05 x,e, 0 0 x,04 065 0H 0 x3 e2 x2e, 0 075 0 07 0 x4 e.> 0 XA 0 0*7 0U

where 0e„07VOK(„087,0^9,1,G7)01, arc as before. The parameter a in Equation 37 and the three Xs in Equation 38 are not identified. One can add a constant to the three xs and subtract the same constant from a without affecting the mean vector of y. (The (x in Equation 1 is the sum of X for y and CC.) Hencc, we may set a = 0 in Equation 37 and estimate the xs in Equation 38. The estimates of the xs will not be equal to the means of the ^variables. For WLSA, the corresponding LISREL specification is 0

r\

y\ h h

0 0

0 0

*, 0

0 0 0 1

0 0 0 0

0

^4

X 0 0 I

0 0 0

*3

0 0

*4 *1*3 *1*4

*3

*7*4

r£l) e2

xi/ x1 /1 x8

e3

s, 82

1. 1. 1,1* 1i

*3 0 *4 1 Vs

0 *4 X,?.4 0 XSA*J x* 0 V»2 V m

X.jX$

X is not significant with any of the methods. For the model with four product variables all three solutions are rather different. In particular, it seems that the ML solution is very different from the UW'LS and UWLSA solutions that are more close. T he I value for 7, is significant at the 5% level for both UWLS and UWLSA, although the t value for LTWLSA is a border line value. This is a reflection of the greater power obtained by using four product variables instead o f one. T he chisquares differ considerably between methods. Pretending that the UWT.SA chi-square value o f 94.09 with 52 degrees of freedom is correct, this has a p value of 0.00032. To test the normality of (^.,9^) we relax the constraints on in Equation 35 and reestimate the model. This gives X2 - 80.76 with 49 degrees of freedom. The difference in chi-squares is 13.33 with three de­ grees of freedom, which is significant at the .5% level. Hence, the normality of (li.ls) is rejected. To test the normality of (8 ,,8*,83,84) we relax the constraints on 0-„0,^,9,,, 67r . 7. © s . ’©K hi Equation 36 and rcestimatc the model with free. This gives x ' = 55.00 with 41 degrees of freedom. T he difference in chi-squares is 25.76 with eight degrees of freedom , which is also significant at the .5% level. Hence, the normality assumption o f (81,82.83,84) is also rejected. I he chi-square of 55.00 with 41 degrees o f freedom has a p value of 0.071. RMSEA is 0.037 with a 90% confidence interval from 0.00 to 0.06. This and other fit statistics given in the output suggests that the model represents a reasonable approxim ation to the population. In the model where both normality assumptions arc relaxed, the struc­ tural Equation 37 is estimated as a t = 0.09*be + 0.90*lv + 0.06*belv + z (0.C8)

(0.12)

(0.10)

1.12

7 33

0.61

Hcnce, neither be nor belv arc significant in this equation. Much larger samples are needed to establish the existence of an interaction effect.

TABLE 3.2 Bagozzi’s BEA Data (A' = 253) Mode) 1: O n e Product Variable P(tramel?r A

Y.a *(Yi ) l-vatue

A

*(7>) t-vahie

A

K •'(7a) t-vakie A

v , •(Yi ) l-vahie

t-vahu s( 7m)

l-vfihie

l-vfthie R2

X2 * T E < B .B )-» L Y (3 .0 * P S (l ,1 > *T E (S ,S ) Y E < 9 . 8 ) - L Y < 4 14 } * L Y ( 5 , 4 ) * T E < 3 , 3 > * L Y < 6 , 2 ) * P S ( 2 , 2 > » T E < 3 . 3 ) T E ( 9 . 9 ) = L Y ( 3 , 4 ) * » 2 * T E < 5 , S ) * L Y ( S , 4 ) * * 2 « T £ ( 3 , 3 ) * L Y ( 3 , 1 ) * * 2 * P S ( 1 , J ) • T E ( 5 , B ) + L Y ( 5 , 2 > * * 2 * P S ( 2 , 2 ) * T K < 3 , 3 ) * Y E ( 3 , 3 ) ^TKCS,S> C 1 < 3 )-P S < 2 ,1 ) L Y (6 .4)® L Y (2,4)»L Y C 4,4 ) L Y < 7 ,4 )» L Y (2 ,4 )* U (S ,4 ) L Y < 8 ,4 )= 1 .Y (3 ,4 > L Y < 4 .4 ) I.Y < 9 ,4 )« L Y < 3 ,4 )* L Y (S ,4 ) . 5 ALL 0 L Y < 2 .4 ) LY < 3,4> L Y ( 4 , 4 ) L Y < S ,4 ) 1 L Y (1 ,4 )

Oil SO IS RS AO-OFF IK VP DF--9

ACKNOWLEDGMENT The research reported in this chapter has been supported by the Swedish Council for Research in the Humanities and Social Sciences (HSFR) u nd er die program Multivariate Statistical Analysis.

REFERENCES Ajzen. I., & Fishbein, M. (1980). Understanding altitudes andpredidingsocird kehtivior. Englewood Cliffs, NJ: Prentice Hall. Ameiniya, Y., & Anderson, T. \V. (1990). Asvmptoiic chi-square tests for a large class o f factor analysis models. Thu Annals o f Statistics, J, 1453-1463. Anderson, T. W\ (1984). An introduction to multivariate sUttislical analysis (2nd ed.). New York: Wiley. Anderson, T. W., & Amemiya, Y. (1988). The asymptotic normal distribution of estimators in factor analysis under general conditions. The Annals of Statistics, 16, 759-771. Bagozzi, R. P., Baumgartner, H., 8c Yi, Y. (1992). Slate versus action oriental ion and the theory of reasoned action: An application to coupon usage. Journal of Consumer Research, IS, 505-518. Browne, M. VV. (1984). Asymptotically distribution-free methods for the analysis o f covariance structures. British Journal of Mathematical and Statistical Psychology, 57, 62-83.

88

JORESKOG AND VANG

Browne, M. W. (1987). Robustness o f statistical inference in factor analysis an d related models. Biomr.trika, 74, 375-384. Fishbeiu, M., Sc Ajzen, 1. (1975). Belief, attitude, intention find behavior: A n introduction to research. Reading, MA: Addison-Wesley. Graybill, F. A. (I9 6 0 ). Introduction to matrices with applications in statistics. Belm ont, CA: Wadsworth. Joreskog, K. G. (1993). Testing structural equation m odels. In K. A. Bollcn & J. S. Long (Eds.), Testing structural equation models. Beverly Hills, CA: Sage. Joreskog, K. G., 8c Sorbom, D. (1989). LISREL 7—A guide to the program an d applications (2nd ed .). Chicago: SPSS Publications. Joreskog. K. G., 8c Sorbom , D. (1993a). New features in P R E LIS 2. Chicago: Scientific Software. Joreskog, K. G., & Sorbom , D. (1993b). New features in LISREL 8. Chicago: Scientific Software. Joreskog, K. G„ & Sorbom, D. (1994). Simulation with P R E LIS 2 and L/SREL 8. Chicago: Scientific Software. Kenny, D. A., & Ju d d , C. M. (1984). Estim ating th e n o nlinear an d interactive effects o f latent variables. Psychological Bulletin, 96, 201-210. Koning, R., N eudecker, H., & W'ansbeek, T. (1993). Im posed quasi-normality in covariance structure analysis. In K. H aagen, D. Baitholoinew , 8c M. Deistier (Eds.), Statistical modelling and latent variables (pp. 191-202). Amsterdam: Elsevier Science. Satorra, A. (1989). Alternative test criteria in covariance structure analysis: A unified approach. Psychometriha, 54, 131—151. Satorra, A. (1992). Asymptotic robust inferences in th e analysis o f m ean and covariance structures. In P. V. M arsden (Ed.), Sociologicalmethodolngy 1992 (pp. 249-278). Cam bridge, MA: Basil Blackwell. Yang, F. (in preparation). Non-linear structural equation models: Simulation studies on the KennyJudd model

CHAPTER

FOUR

Multilevel Models From a Multiple Group Structural Equation Perspective J o h n J. M cA rdle F u in iak i H a rn a g a m i University of Virginia

W henever a prom ising new approach to data analysis becom es available many researchers are keen to try it out. In this chapter, we describe som e of o u r attem pts to use the new statistical models term ed random coefficient or hierarchical or multilevel structural models (after Mason, W ong, & F.ntwisle, 1984). O f course, multilevel m odels are n o t really new m odels because they have been available for over two decades (e.g., Aitkin & Longford, 1986; Hartley & Rao, 1967; Harville, 1977; Jc n n ric h 8c Sam pson, 1976; Lindley & Smith, 1972; Novick, Jackson, Thayer, & Cole, 1972). However, several recent reviews show a surge o f interest in multilevel m odels (see Bock, 1989; Bryk & R audenbush, 1993; G oldstein, 1987; Hoc 8c Ki eft, 1994; Long­ ford, 1993; M uthen & Satorra, 1989; R audenbush, 1988). T hese statistical m odels have becom e popular for a very good reason— multilevel m odels seem to provide a reasonable way to deal with th e com plex statistical and m athem atical problem s o f “nested” o r “clustered” data collection designs. Much research on the multilevel m odel (ML) has focused on statistical ancl algorithm ic issues in the efficient calculation of the fixed regression param eters and the random com ponents in these models. D etailed treat­ m ents are presented by Bock (1989), Bryk and R audenbush (1993), Long­ ford (1993), and Kreft, Del.eeuw, and van de L eedcn (1994). Most agree that D em pster, Laird, an d Rubin (1977) provided a break th ro u g h in n u ­ merical efficiency for m axim um likelihood estim ation (MLE) o f variance com ponents with the FM algorithm s. This m ethod was used to estim ate covariance structures for both “balanced” designs with equal num bers o f 89

90

MrARDLE AND HAMAGAMI

observations p er unit, and for “unbalanced” designs d ue to incom plete values (Dem pster, Rubin, Sc Tsutakawa, 1981; Little & Rubin, 1987). In curren t work, o th e r techniques an d algorithm s are being used and M IJ. o f multilevel m odels with several levels o f analysis can be fitted using several widely available com puter program s (e.g., GEN MOD by W ong 8c Mason, 1985; BMDP-5V by Je n n rich & Schluchter, 1986; VARCL by Longford, 1987; Empirical Bayes m odels in Bock, 1989; MI,3 by Prosser, Rabash, Sc. Goldstein, 1991; HEM by Bryk & R audenbush, 1993). D etailed com parisons o f these program s arc provided by Krcft, dc Lceuw, an d Kim (1990) and Krefl, d e Leeuw, and van de L eeden (1994). This ch ap ter is n o t about these advances in co m p u ter program s but about the conceptual m odels underlying these developm ents. We explore som e relationships between the “new" multilevel structural m odels (MLSEM) and the “o ld ” m ultiple groups structural equation models (MG-SEM). T hese questions were stim ulated by work d o n e by G oldstein an d M cDonald (1988), M cDonald and G oldstein (1989), M uthen and Satorra (1989), an d extended by McDonald (1994) and M uthen (1994). Wc extend this work in several different directions— we give a simple dem onstration of the relationships between standard MG-SEMs to two basic m odels of multilevel analysis: (a) the variance com ponents (ML-VC) m odel, an d (b) the random coefficients (ML-RC) m odel. Wc do not elaborate o n statistical o r technical features, but we do discuss som e o f the available co m p u ter program s. T o fu rth er explore these issues, we present three applications of ML-SEM models on individual differences in hum an cognitive abilities. Each o f these applications starts out with a substantive question that includes a nested data set and a m odel with some form o f unobserved o r latent variable. In som e cases, these ML models will be considered as MG-SEM with large num bers o f groups an d additional param eter constraints. In oth er cases, these multilevel models will be considered as structural m odels for raw data with large am ounts o f incom plete scores (after I .ange, Westlake, Sc Spence, 1976; N eale, 1993; McArdlc, 1994). As it turns out, o u r ML-SEM representation does n o t lead to eith er im proved statistical o r algorithm ic efficiency. But our work does show how to: (a) d efine the basic structural equation assum ptions in ML models, (b) draw an acmrate path-graphic display o f ML m odels, and (c) add new unobserved or latent variables in ML models.

MULTILEVEL VARIANCE COM PONENTS (ML-VC) MODELS Many data sets can be term ed multilevel (ML) because they have some easy to identify levels o f aggregation. T h e term hierarchical is an appropriate way to describe such data (but this should not be confuscd with hierarchically

91

4. MULTILEVEL SEM MODELS

ordered set of regression equations). For example, wc might be mainly interested in the behavior of students, and we might recognize that each student is a m em ber of a specific classroom. T o fully investigate the stu­ d e n t’s behaviors we might need to know about the overall behaviors of the classroom as well. We might even climb further up the scale of aggregation and recognize that each classroom also exists in a particular school, and each school is located in a ccrtain district, and so on. W hen it is not possible for the same student to be in more than one class, and for the classroom to be in more than one school, we usually add the im portant statistical feature—the students are said to be “nested” within classrooms, an d the classrooms arc “nested” within schools (Aitkin & Longford, 1986; Cronbach, Gleser, Nanda, & Rajaratnam, 1972; Kreft, 1993). In the models to follow, we write the outcome score Yn for the wth individual (of Ng) within the gth group (of G). These scorcs and groups arc defined in each application, and we usually add general independence assumptions—the individual responses are independently sampled and the groups do not include any overlapping responses. In m ore advanced mod­ els, variables are measured or considered at higher levels of aggregation, and more complcx forms of overlapping levels are possible. In this chapter we consider only the the simplest possible models for the analysis of such multilevel data. Standard ANOVA as a Single Group Structural Equation Model (SEM) A first multilevel model can he based on a comparison o f means across groups. O ne standard approach to com paring group differences in means comes from the linear regression formulation of the well-known analysis of variance (ANOVA) model (Cohen & Cohen, 1985). H ere we write j

Y = Y A * D +E , jr=•>

(1)

where the C regression coefficients Ae are slopes associated with individual predictor variables D'r As usually conceived, these Dgarc categorical variables representing different contrasts am ong the G groups. For example, if the Dg are represented using “dummy codes,” then an estimated / \ intercept can be directly interpreted as the mean of the control group and the A, in terms of mean differences from the control group. Alternatively, if the Dg are defined as “effect codes,” then the estimated /lo intercept can be directly interpreted as the grand mean of all groups and the A, as m ean differences from the grand mean. The F. is an unobserved residual erro r term , and we

92

McARDLE AND HAMAGAMl

usually assume that these scores are are normally distributed with m ean zero and constant variance [Eg ~ N(0, V') 1. O nce the D,. are explicitly staled, we can obtain expectations an d in terp ret the param eters As for each group C>. A structural path diagram o f this standard ANOVA m odel is given in Fig. 4.1a. Observed variables (F) are represented as squares and u n o b ­ served variables (E ) are represented as circles. A triangle is used h ere to d enote a constant o r assigned variable (as suggested by McArdle & Boker, 1991). In this case, we use triangles to define all f) group coded variables. This is a nontrivial graphic dcvicc only bccause il clarifies som e im p o rtan t distinctions, which need to be m ade in m ore com plex models. In each model here, the m eans, variances, and covariances o f the l)F variables are a direct function o f the coding schcm c used. ANOVA as a Multiple Group Structural Equation Model (MG-SEM) T he previous m odel can also he organized using the advanced techniques of multiple group (MG) SEMs (e.g., LISREL; Jbreskog & Sorbom, 1979,1993). In one approach we write a specific m odel for each o f th e G groups as K„(1> = A / 11+ £„"> ,

(2 )

Y m = jvff) + E V)f or generally, Y?;« = M l' + F»tg) *

w here, for the g-th group, the regression coefficient Af«-’ is th e group m ean, and the Ee is an unobserved residual o r erro r term . As before, we assume that these residual scores are norm ally distributed with m ean zero and constant variance within groups [jfcW ~ N(fi, ) J. T he first step in a m ultiple group m odel for m eans an d covariance is given in Fig. 4.1b. H ere each separate group includes a single observed variable (square), a single erro r term (circle), an d a single constant (tri­ angle). U sing widely available com puter program s (e.g., LISREL), we can now set any o r all coefficients to be invariant over groups, vve can estimate the param eter values from the sum m ary statistics [using MLEJ, an d wc can exam ine th e goodness o f fit of these restrictions [using x 2]- For ex­ am ple, if set all of th e e rro r variances [ V?'-'*] to be equal then we can lesi the hypothesis o f “hom ogeneity o f variance." In addition, we can set all of the m eans [ Af>’:] equal lo test for the usual one-way ANOVA om nibus hypothesis o f “equality o f means." Specific hypotheses about equality of (G -K ) specific group m eans an d variances can be tested by fitting separate models with different constraints across groups. From this MG-SF.M perspective, we can also m ore closely mimic the ANOVA m odel estim ation for a particular coding schem e o f interest. This can be done by writing a specific model for each for the G groups as

IOQT4

afe™ Girup2

um

y 11- w i

% IB)

Mi °I

(a) T ie Standard ANOVA Model

(b) Group Means and Covariances in MG-SEM

C ortrM l V « ftiU u

GroupQ

(c) Prespecified MG-SEM Contrasts

Y*

(d) Multilevel ANOVA in MG-SEM

MG. 4.1. Alternative path diagrams o f the standard ANOVA model for (1 independent groups. *

VJ'> = A + A4), these multilevel m odels give a direct estimate of the vari­ ance V„ of these m ean differences. In this standard multilevel m odel for the g-th group, we write the linear m odel and expectations [£| as

95

4. MULTILEVEL SEM MODELS

K g -- A

where we usually assume that both scores A^and Egare norm ally distributed with m ean zero an d constant variances V„ and Vr (see Bock, 1989) an d with no correlation am ong these group-level scores. The typical calculation of the “intraclass correlation” is now a direct function o f the variance between groups V„ and within groups Vr [/{,, = V J{V a + V,)J (see Singer, 1987). Using the MG-SF.M notation set out earlier, we can rewrite this m odel for group g as V*» - A + .Sti * + S/ * eW'*>, with C C ^ ^ = 0 a « ^ ( ' (|!- l g .i

(6)

g= i

c

c d f) = 0 and £ d f} = 1 r 1 *=1 so the a (fi) are m ean deviations standardized over all groups, and so the coefficient S„ represents the standard deviation of the m ean differences overall groups | S„ = VV„ ]. F urther constraints o n the d*> term s are required to make their distribution be symmetric (e.g., no skew) or norm al, but these constraints will not be discussed fu rth e r here. After including the standardization and in d ep en d en ce constraints, these structural equations lead to the appropriate multilevel expectations for each group. T o dem onstrate this result here, we can expand the m odel expectations (£) for the m eans o f any group g as = C [ jt\\W ] - £(M + Slt * + S, * TI ]') = £(A) + S„*£(dt>) + S' *£() = A + Sa * 0 + Sf *Q = A and for the variances o f any group g as V= ay«> - M,y«> - A f ] - ■£([A + Sn * a# -i- S, * ) + £(A * S, * tfv) +£(Stl *

* ^4) + £(S, * d ''1)

+ £(.S0 * d--1* S, * ) + £(.s; * (>«' * A) + £(.V *

* Sn * d K') f £($. * ^ ' )

= 0 + 0 + 0 + 0 + ^ ^-0 + 0 + 0 + Si = K + vr

(8)

96

Mi ARDl.F. AND HAMAGAMt

T hese MG-SEM equations also allow us lo draw a sim ple hut accurate path diagram for an ML-ANOVA, shown here as Fig. 4 .Id. T h e path diagram is unusual in several ways. First, it allows the tracing rules o f path diagrams to provide the corrcct expectations for the group m eans an d variances ju st given. Unfortunately, as drawn here, this m odel does n o t clearly show all the standardization (and possibly norm alization) constraints req u ired for the identification of th e between groups standard deviation S„ term . These kinds o f constraints can, in fact, be placed in a path diagram using addi­ tional nodes (as in H orn & McArdlc, 1980), b u t this is n eith er efficient, no r essential to the rest o f ou r discussion. Needless to say, these constraints m ust be assum ed o r the multilevel m odel will not be properly fitted an d represented in the expectations.

MULTILEVEL RANDOM CO EFFICIENT (MIvRC) MODELS M ore complex ML m odels are used to u nderstan d variation in th e outcom e variables [F„ J in term s o f in d e p en d e n t p redicto r variables [ A’.J m easured on every individual w ithin a group. In som e o f these m odels we will go fu rth e r and try to u nderstand group variation as a function o f a set of group level predictor variables [ZK] (having only one value for all m em bers of group £•). T h e sim plest cases o f the kind o f multilevel m odels will be considered in this section. S tandard ANCOVA G roup D ifference M odels A nother im portant multilevel m odel is based on a com parison o f regression coefficients across groups (i.e., random coefficient models). O n e standard approach to com paring group differences in equations com es from the well-known analysis of covariance (ANCOVA) m odel (C o h en & C ohen, 1985). In linear regression notation, we write Y„_g = A + B *

+ (I* Zg+ P * (X„„ * ZJ + E„_g ,

(9)

w here the regression coefficient A is the intercept term (the expected score on F when all predictors are zero), tt is the slope for individual predictor X (the change in Y for a one-unit changc in X h o ld in g Z con­ stant), Q is the slope for group predictor Z (th e change in the intercept for }' for a one-unit change in Z), an d P is the slope for product variable X * Z (the change in the X slope for Y for a one-unit change in X * Z). T h e E is an unobserved residual term with the usual assum ptions [C ,V (0 , V ,) ]. This standard ANCOVA model is draw n as a path diagram in Fig. 4.2a. From this m odel we can obtain expectations about the structure

(c) StructuraJ Contrasts in MG SEM {Gth Group)

(d) Multilevel Random Coefficients in MG-SEM (Gth Group)

FIG. 4.2. Alternative path diagrams of the ANOVA and variance com po­ nents models for C independent groups.

of the means, variances, and covarianccs of our observed variables Y and X for each group G. There are some advantages in writing this model as a MG-SEM (as in Sorbom, 1974, 1978). In our current notation, this can be done by writing y to = a ** + &*> * Xj*} + Ev[g\

(10)

leading to group expectations for the mean and variance of M /* = €(Y *) = Ato + B*>*MxW and VyW = £([ W - My] [ Y # - MyX) = W * Vx* B # +

(11)

where the expectations display the unstated independence assumptions. This multiple group regression model is now redrawn as a path diagram in Fig. 4.2b. Using this MG-SEM, it is easy to see bow Aij& intercepts or slope parameters can be constrained to form constraints represented hy

98

M = Vr], but this is a testable assum ption in MG-SEM. It is also very easy to see how partial constraints may be placed on some group s an d not others. .Although possible, these constraints are m ore difficult to represent in the standard ANCOVA m odel. In o rd er to express som e typical m ultilevel m odel constraints in this notation, it is useful to rewrite the previous m ultiple g roup regression m odel in term s o f m ean deviations Y U - A ++ (B + &*>) * x y + S

CPACOKE C M /I 'S A T G P A C O I U C O ttAI) -> A C I'S A T C A C l’S A T -> AC/I'SAT Vttriunces a n d Covirritmces C Variance (Intercept) Vari an ce v and Yipl in both reading and mathematics. U nder the individual growth models in Equations 1 and 2, these records can be represented conveniently as: w fcl> + jrtO P0) l*.

"i = i k i h n'P r >p

(3)

and: -jW Um ) p

=

r 1 1

rr- r'> 'Hi p

*'i *

'e l? ' +

Tt1” * i

n ?

k

p ( K l)

(4)

p(O T)

y p _

For reasons of pedagogy an d parsimony, in Equations 3 and 4 and through­ out the rest of the chapter, we have retained symbols l\ through % to represent the timing of the occasions o f m easurem ent. In any particular research project, of course, each o f these symbols will have known constant value. In our data example, because of our recentering and logarithmic transform ation of the lime metric, /i through ft take on values -0.693,1.504, and 2.251, respectively. For the purposes of subsequent analysis, we must com bine the separate Level 1 growth models of Equations 1 and 2 into a single composite “crossdom ain” model that represents simultaneous individual change in both reading and mathematics achievement, as follows: n?

K

1

A

1

h h

0

a

0

0

0

1

0

0

1

0

0

1

1 0

Vt») hp j

0

pto fc|/> £)

0

k

"B
) 0 ■V ~ N C*( W) 0 EIl> 0 0 p {m ) \

»

oS* 0 0 0 0 0

L

o o3« 0 0 0 0

0 0 (7*0 0 0 0

0 0 0 0 0

0 0 0 0 0

0 0 0 0 0

\

(6)

/

w here the m ean vector and covariance m atrix o n the right-hand side o f Equation 6 are assumed identical across children. Notice th at although we are assum ing initially that the Level 1 m easurem ent erro rs arc homosccdastic within dom ain, we are not assuming that they arc hom oscedastic across dom ains, because there is no a priori reason to believe that m easurem ent erro r variance will be identical in both reading an d m athem atics. If there

136

WILLEI I AND SAYKR

were, then this constraint on the Level 1 m easurem ent e rro r variance could easily be tested and im posed. T h e covariance structure approach that we describe perm its great flexi­ bility in the m odeling o f the Level 1 m easurem ent e rro r covariance struc­ ture and this flexibility is a m ajor advantage for the m ethod. We are not restricted only to “classical assum ptions"—we can, in fact, perm it each person lo draw th e ir m easurem ent e rro r vector at random from a distri­ bution with m ean vector zero an d an unknow n covariance m atrix whose shape can be specified as necessary. This flexibility perm its us to test the fit of the classical e rro r structure here against other, m ore liberal, hypothe­ ses and we can modify the Level 1 e rro r covariance stru ctu re as necessary. A nd regardless o f th e final structure adopted, we can estim ate all o f the com ponents o f the hypothesized Level 1 erro r covariance m atrix.5 This facility is im p o rtan t in a study o f individual change because knowledge o f the m agnitudes o f th e Level 1 e rro r variances an d covariances underpins the estim ation o f m easurem ent reliability an d m easurem ent erro r auto­ correlation. Modeling Interindividual Differences in Change liven though all population m em bers are assumed to sh are a com m on functional form for th e ir changes in each dom ain, the tru e growth trajec­ tories may still differ across people within dom ain because o f interindi­ vidual v a ria tio n in th e values o f th e in d iv id u al g ro w th p ara m ete rs. F urtherm ore, the individual changes may be linked across dom ains because of covariation am ong the individual growth param eters from dom ain to dom ain. Thus, w hen we con d u ct cross-domain analyses o f change, we nec­ essarily express an interest in th e population betw een-pcrson distribution o f the vector o f individual growth param eters. In o u r data exam ple, for instance, we specify th a t everyone in th e popu­ lation draws their la ten t growth vector indepen d en tly from a multivariate norm al distribution o f the following form: Q

f X *'

\ °> x I

~ N K ' M l? V

° o c

°cv ;

o o

° \> r ~

*1 ”o a k"

CT< v ,"

CT,.-!,,"" CT*

/

This hypothesized distribution is a Level 2 betw een-person m odel for interindividual differences in true change. In the m odel, th e re are 14 ''Provided th e hypothesized covariance structure m odel is identified.

5. GROWTH MODELING AND COVARIANCE STRUCTURE ANALYSIS

137

im portant bctw een-person param eters: the 4 population m eans, 4 variances, and 6 covariances of' the latent grow th vector. T hese param eters provide inform ation on the average trajectory o f true change within dom ain, the variation an d covariation o f true intercept an d slope w ithin dom ain, an d the covariation o f true in tercep t and slope between dom ains, thereby answering the research questions cited earlier in this section. All o f these Level 2 param eters can be estim ated using the covariance stru ctu re ap proach that we describe next.

ADOPTING A COVARIANCE STRUCTURE PERSPECTIVE In T able 5.2, we present th e sam ple m ean vectors an d covariance m atrices for the variables th a t were in troduced in Table 5.1, estim ated using data on all children in the illustrative data set. W hat kinds o f statem ents do these statistics readily support? Focus, first, on the statistics th at describe the three waves o f observed reading achievem ent. E xam ining the wavc-bywave m eans {the first three entries in the left-hand p art o f the sam ple mean vector) we see that, on average, observed average national ran k in reading tends to decline slightly for healthy children, increase slightly for astham tic children, an d p lum m et for children with seizures over the school carccr. T h e m agnitudes o f the variances in the leading diagonals o f the covariance matrices for reading achievem ent (the [3 x 3] subm atriccs in th e u p p er left-hand co rn e r o f the sam ple covariance matrices) suggest that, for all health status groups, observed reading achievem ent becom es generally less variable over tim e as adolescents’ national rank converges with age. Inspec­ tion of the be tween-wave covariances am ong the reading scores (again in the |3 x 3] reading covariance subm atrices) suggests a generally positive association am ong observed reading scores over the three occasions o f m easurem ent but contribute little to o u r u n d erstan d in g o f change in read­ ing achievem ent over tim e. Similar statem ents can be m ade ab o u t the sam­ ple m eans, variances, a n d covariances o f th e m athem atics achievem ent scores. Finally, inspection of the subm atrices o f covariances am ong the three waves o f reading scores an d the th ree waves o f m athem atics score within cach health status group (the [3 x 3] subm atrices in the lower left corners o f the sam ple covariance m atrices), suggest that observed reading and m athem atics scores are positively associated on each o f th e occasions o f m easurem ent but this does n o t allow us to craft statem ents ab o u t po­ tential relationships between changes in the two dom ains. So, even ignoring the distinction between observed an d true scores, it is n o t easy to reach inform ed conclusions abo u t interindividual differences in change by inspecting between-wavc sum m ary statistics (Rogosa el al., 1982; Rogosa & Willett, 1985; Willett, 1988). Between-wave statistics do not provide

138

WILLETT AND SAYF.R TABLE 5.2 Estimated M eans and Covariances Tor T h ree Waves o f R eading and M athem atics A chievem ent Scores at Ages 7, 1 1, and 1C for (a) healthy ch ild ren (n = 5 1 4 ), (l>) child ren with chronic asthm a (n = 137), and (c) child ren with ch ro n ic seizures (n = 72) Rending

HmtLh Status H ealthy

Asthma

Seizures

Summary Statistic Means Covarianccs

M eans Covariances

M eans C ovariances

Age 7 56.97 972.18 583.33 519.94 431.75 542.41 448.83 55.12 986.71 548.25 520.82 481.10 566.74 463.27 53.05 1155.83 559.29 573.27 535.72 608.91 478.83

.MttthfmtUir*

Age I I

Agr 16

Agp 7

Age II

Age 16

54.28

53.78

56.72

54.87

55.01

823.82 683.15 369.76 564.72 504.99

824.68 371.11 574.97 549.05

773.68 430.86 380.74

775.30 563.86

764.40

56.47

57.15

58.58

54.08

53.63

802.96 657.57 372.74 567.08 491.04

849.52 381.45 531.00 546.13

800.32 442.87 349.41

788.15 589.60

819.31

43.81

44.06

49.89

45.38

16.72

943.51 299.72 541.50 555 .56

809.11 421.66 358.34

773.57 559.24

743.41

819.39 709.01 287.82 580.76 460.77

a “view" th a t su p p o rts easy inference a b o u t differen ces in individual change. T o answ er questions a b o u t changc, o n e m ust a d o p t a perspective that em phasizes change. R ather th an sum m arizing d ata as beiween-\vave vari­ an ces an d covariances, o n e m u st use individual grow th trajectories. For instance, in bo th do m ain s, it is easier to see from Fig. 5.1 th a t observed chan g e can be e ith e r positive o r negative, th a t individuals are converging over tim e, an d chat th e re is hetero g en eity in level an d rate o f c h a n g e across people. T h e d ata a re identical in b o th cases, bill ihe view offered by the sum m ary statistics differs— each view su p p o rtin g a qualitatively d ifferen t kind of in te rp re tatio n . D oes this m ean th a t we ca n n o l recover in fo rm atio n ab o u t ch an g e oncc data have b een collapsed in to between-wave m ean s an d covariances? No, it d oes not. Wc m u st sim ply “m atch u p ” th e between-wave an d change perspectives explicitly. If we could, for instance, figure o u t th e between-wave im plications o f th e individual grow th m odelin g perspective ad o p ted in

3. GROWTH MODELING AND COVARIANCE STRIICTIRE .ANALYSIS

139

Equations I through 7, we could check w hether they com pared favorably with the data sum m aries in T able 5.2. For instance, in E quations 6 and 7, wc have proposed what wc believe arc reasonable m odels for interindividual variation in the individual growth param eters an d erro rs o f m easurem ent. If we are correct, th en these m odels m ust underw rite the belween-wave m ean and covariance structure evident in Tabic 5.2. In o th e r words, al­ though we are dealing wilh two different perspectives on the problem —a “between-wave” perspective in T able 5.2 an d a “grow th” perspective in Equations 1 through 7—the between-wave covariance structure im plied by the growth m odels m ust resem ble the between-wave covariance structure observed in ou r data if ou r param eterization o f change is correct. Fortunately, well-developed m ethods are available for testing o u r suspi­ cions— the m ethods o f covariance stru ctu re analysis. Starting with ihe sam­ ple m ean vcctor and covariance m atrix in Table 5.2 as “in p u t,” we can claim that o u r hypothesized growth m odels fit when, having estim ated the param eters o f E quations 6 and 7, we can accurately predict the betweenwave covariance stru ctu re of th e observed data. As M eredith, Tisak, McArdlc and M uthen pointed out, the growth form ulation that we have posited—th e Level 1 m odels o f Equations 1 th ro u g h 6 an d the Level 2 model o f E quation 7— falls naturally into the fram ework offered by the LISREL model with m ean structures (Joreskog & Sorbom , 1989). Thus, ML, estimates of the im portant param eters in E quations 6 and 7 can be obtained by covariance structure analysis, as we now dem onstrate. Rewriting the Composite Cross-Domain Individual Growth Model as the LISRF.I. Measurement Model for Y W hen covariance structure analysis is used to conduct cross-domain analy­ ses of change over time, the hypothesized com posite cross-domain indi­ vidual growth m odel in E quation 5 plays the role of the LISREL m easure­ m ent m odel for the vector o f endogenous variables Y. For instance, in o u r illustrative exam ple, the com bined em pirical growth record of the />h child in both reading and m athem atics achievem ent can be written as: W') 1 V

-> K! yw '#> ■ip n?

0 0 0 0 0 0

1 1 1 + 0 0 0

•ip n pi1")

n0f>

„(«) IP h

(8)

p(n) of*) H p

which has the form at o f the LISREL m easurem ent m odel for endogenous variables Y:

140

WILLE.1T AND SAYER

Y= x + A ^ + e

(9 )

with LISREL sc ore vectors th a t contain the com bined em pirical growth rccord. the fo u r individual grow th param eters, an d the six errors o f meas­ urem ent, respectively:

r

r ei

1

y)

*!>

-I’

Y=

H? ,*i =

,e

Hf> -

p(iw)

( 10)

p(iw)

n?

cl."*)

and, unlike the usual practice of covariance structure analysis, the elem ents of the LISREL xy an d param eter matrices arc entirely constrained to contain onlv known values an d constants: l 1 1 0 0 0

"o' 0 0 0 >\ 0 0

c'’'5, and from the com posite cross-domain Level 1 growth m odel in to the LISREL endogenous construct vector n, which we have then referred to as the latent growth vector. In o th e r words, ou r fully constrained specification o f A, has forced the rj-vector to contain the very individual-level param eters whose I .evel 2 distribution must becom c the focus o f o u r subsequent between-person analyses. T hese required Level 2 analyses are conducted in the “structural” p art o f the general LISREL m odel— it is this part o f th e LISREL m odel that perm its the distribution o f the n-vector to be m odeled explicitly in term s of selected population m eans, variances, and covariances. And, of coursc, the particular population means, variances, an d covariances that we select as param eters o f the structural m odel are those that we have hypothesized are the im portant param eters in the jo in t distribution o f the laient growth vector in E quation 7. All that is required is to rewrite the latent growth vector as follows: 7t^ ' Lo» Jt< \>ym >>

-p(») 0 0 0 0

0 0 0 0

0 0 0 0

0 0 0 0

'/>

7tIp n(m)

(13)

7l(w l/')

\.yfc\)< y\>y(h)< > J. etc. As a random , tim e-dependent function, y(t) has first-order m ean an d sec­ ond-order covariance functions. If the m ean function is a constant, £[}(

HQ L Y( 6 / 2 ) L Y ( 3 4 , 6 ) EQ L Y( 7 , 1 )

LY( 3 5 . 5 )

EQ L Y ( 8 / 3 > L Y ( 3 6 , 7 ) EQ L Y ( 9 , 1 ) L Y( 3 7 , 5 ) EQ LY ( 1 0 , 2 ) LY ( 3 8 , 6 ) EQ L Y( 1 1 , 1 )

LY( 3 9 , 5 )

EQ LY ( 1 2 , 1 )

LY( 4 0 , 5 )

EQ L Y ( 1 3 , 4 ) L Y( 4 1 , B ) EQ L Y ( 1 4 , 3 )

LY( 4 2 , 7 )

EQ L Y ( 1 5 / 2 )

LY(4 3 . 6 )

EQ L Y ( 1 6 , 1 )

LY( 4 4 , 5 )

6. STRUCTURAL TIMF. SERIES MODELS EQ L Y ( 1 ? , 4 )

LY( 4 5 , 8 )

EQ LY ( 1 8 , 3 )

LY ( 4 6 , 7 )

EQ LY ( 1 9 , 1 )

LY ( 4 7 / 5 )

EQ LY ( 2 0 , 2

LY ( 4 8 /

)

6)

EQ l.Y ( 2 1 , 2 )

I.Y ( 4 9 / 6 )

EQ L Y ( 2 2 / 4 )

LY(50,8)

EQ LY ( 2 3 , 4 ) L Y ( 5 1 , 8 ) EQ LY ( 2 4 , 4 ) L Y < 5 2 , 8 ) EQ LY ( 2 5 , 3 ) L Y ( 5 3 , 7 ) EQ L Y ( 2 6 , 3 J L Y ( 5 4 , 6 ) EQ LY( 2 7 , 3 )

L Y ( 5 5 , 7)

EQ L Y( 2 8 , 4 ) L Y ( 5 6 , 8 ) EQ L Y < 1 , 6 ) L Y ( 2 9 / 1 0 ) EQ LY ( 2 , 5 ) L Y ( 3 0 , 9 ) EQ L Y ( 3 , 8 )

L Y( 3 1 , 1 2 )

EQ L Y ( 4 , 5 ) L Y ( 3 2 / 9 ) EQ LY ( 5 , 5 ) L Y( 3 3 , 9 ) EQ LY ( 6 , 6 ) L Y ( 3 4 , 1 0 ) EQ LY ( 7 , 5 ) L Y ( 3 5 , 9 ) EO L Y ( f t , 7 )

LY(3 6 , 1 1 )

EQ L Y( 9 , 5 )

L Y( 3 7 / 1 0 )

EQ LY ( 1 0 , 6 )

LY ( 3 8 , 1 0 )

EQ L Y ( 1 1 , 5 )

LY ( 3 9 , 9 )

EQ L Y ( 1 2 , 5 )

LY ( 4 0 , 9 )

EQ L Y ( 1 3 , 8 )

L Y ( 4 1 , 12)

EQ L Y ( 1 4 , 7 ) L Y( 4 2 , 1 J ) EQ L Y ( I S , 6 )

L Y( 4 3 / 1 0 )

EQ L Y ( 1 6 / 5 )

L Y( 4 4 , 9 )

EQ L Y ( 1 7 / 8 )

LY(4 5 , 1 2 )

EQ L Y ( 1 8 , 7 )

LY ( 4 6 , 1 1 )

FQ L Y ( 1 9 , 5 )

LY )

TE(17,17)

TE(18,1B)

TE(19,19)

FR T E ( 2 0 , 2 0 )

TE(21,21) TK(22,22)

TK(23,23) TH(24/24)

T E ( 2 5 , 25)

FR T E ( 2 6 , 2 6 )

TE(27,27) TE(28,28)

FR T E { 2 9 , 1 )

TE( 3 0 , 2 )

TE(31,3)

FR T E ( 3 6 , 8 )

TE(37,9)

TE(38,10)

TK(32,4>

TR(33,5)

TE(39,11)

TE(34,6)

TE{40,12)

TR(35,7)

TE(41,13)

FR T E ( 4 2 , 1 4 )

TE(43,15) TK(44,16)

TB(45,17)

TK(46,18)

TK{47,19)

KK T E ( 4 R , 2 0 )

TE(49,21) TE(50/ 22)

TE(51,23)

TE(52,24)

TE(53,25)

FR T B ( 5 4 , 2 6 )

TE(55,27) TE(56,28)

VA 1 . 0 0 L Y ( 1 / 2 )

LY(2 9 / 6 )

VA 1 . 0 0 L Y ( 2 , 3 )

L.Y( 3 0 , 5 )

VA 1 . 0 0 L Y ( 3 , 4 ) L Y ( 3 1 , 8 ) FK LY ( 4 , 1 )

l . Y( 3 2 , 5 )

FR L Y ( 5 , 1 )

LY(3 3 , 5)

FK LY ( 6 , 2 ) l.Y ( 3 4 , 6 ) FR LY( 7 , 1 ) I. Y( 3 5 , 5 ) VA 1 . 0 0 L Y ( 8 , 3 ) FR LY ( 9 , 1 )

LY( 3 6 , 7 )

LY( 3 7 , 5 )

FR L Y ( 1 0 . 2 )

LY( 3 8 , 6 )

FR L Y ( 1 1 , 1 )

LY( 3 9 / t> )

FR L Y ( 1 2 , 1 )

LY ( 4 0 , 5 )

FR LY ( 1 3 , 4 ) LY ( 4 1 , 8 ) FR r.Y ( 1 4 , 3 ) LY ( 4 2 , 7 ) FR L Y ( 1 5 , 2 )

LY( 4 3 , 6 )

FR L Y ( 1 6 , 1 )

LY( 4 4 , 5 )

FR L Y ( 1 7 / 4 )

LY(4 5 , 8 )

FR LY ( 1 8 / 3 ) I.Y ( 4 6 , 7 ) FR L Y ( 1 9 / 1 )

LY( 4 7 , 5 )

FR L Y ( 2 0 , 2 )

LY( 4 8 , 6 )

FR L Y ( 2 1 / 2 )

LY(4 9 , 6 )

FR L Y ( 2 2 , 4 )

LY(50,8)

FK i.Y ( 2 3 , 4 ) LY ( 5 1 / fl ) FR L Y ( 2 4 , 4 )

LY(52,8)

FR L Y ( 2 5 , 3 )

LY L Y ( 2 9 / 1 0 ) FK L Y( 2 , 5 )

L Y( 3 0 , 9 )

FR L Y ( 3 , 8 ) L Y ( 3 1 , 1 2 ) FR L Y ( 4 , 5 ) L Y ( 3 2 , 9 ) FK LY ( 5 / 5 ) l.Y ( 3 3 . 9 ) FR L Y ( 6 , 6 > L Y ( 3 4 , 1 0 ) FR L Y ( 7 , 5 ) L Y ( 3 5 , 9 ) FR L Y( 8 , 7 ) L Y ( 3 6 , 1 1 ) FR L Y ( 9 , 5 ) L Y ( 3 7 , 1 0 ) FR L Y( 1 0. . 6 )

LY( 3 8 , 1 0 )

FR L Y ( 1 1 , 5 )

LY( 3 9 / 9 )

190

HERSHBERGER, MOLENAAR, CORNEAL LY(40,9) LY ( 4 1 , 3 2 ) LY ( 4 2 , 1 1 ) LY ( 4 3 , TO) LY ( 4 4 / 9 ) LY( 4 5 / 1 2 ) LY( 4 6 / 1 1 ) LY( 4 7 , 9 ) L Y ( 4 8 ,10) LY ( 4 9 / 1 0 ) LY( 5 0 , 12) LY ( 5 1 / 1 2 ) L Y ( 5 2 , 12) LY( 5 3 / 1 1 ) LY(5 4 / 1 0 ) LY ( 5 5 , 1 1 ) LY( 5 t >. 1 2 ) LY(32,5) •Y( 3 3 , 5 ) LY ( 3 4 , 6 ) LY( 3 5 / 5 ) LY( 3 7 » 5 ) LY( 3 8 / 6 ) L Y( 3 9 / 5 ) LY( 4 0 , 5 ) LY ( 4 1 , 8 ) LY ( 4 2 / 7 ) LY ( 4 3 / 6 ) LY ( 4 4 , 5 ) L Y( 4 5 , f t ) LY ( 4 6. 7 ) LY ( 4 7 , 5 ) f, Y(4e, 6) LY ( 4 9 , 6 ) LY ( 5 0 , 8 ) LY ( 5 1 , 8 ) LY ( 5 2 , 8 ) LY(5 3 / 7 ) LY(5 4 , 6 ) LY ( 5 5 , 7 ) LY ( 5 6 , 8 ) L Y( 2 9 , 1 0 ) LY(3 0 / 9 ) LY( 3 1 / 1 2 ) LY(32,9) I.Y ( 3 3 / 9 ) 6

I.Y ( 3 4 / 1 0 )

6. STRUCTURAL TIME SERIES MODELS EQ LY

7.5)

LY< 3 5 , 9 )

EQ LY

8.7)

LY(3 6 , 1 1 )

EQ LY

9.5)

I . Y( 3 7 , 1 0 )

EQ LY

10.6)

EQ LY

11.5)

LY ( . 1 9 , 9 )

EQ LY

12.5)

LY( 4 0 , 9 )

LY(3 8 , 1 0 )

EQ LY

1 3 , fl ) L Y ( 4 1 , 1 2 )

EQ LY

14.7)

LY( 4 2 , 1 1 )

EQ LY

15.6)

LY( 4 3 , 1 0 )

EQ LY

16.5)

LY(4 4

EQ LY

1 7 , fl)

L Y( 4 5 , 1 2 ) LY( 4 6 , 1 1 )

,9 )

EQ LY

18.7)

EQ LY

19.5)

LY( 4 7 , 9 )

EQ LY

20.6)

LY( 4 8 , 1 0 )

EQ LY

21.6)

LY( 4 9 , 1 0 )

EQ LY

22.8)

LY( 5 0 , 1 2 )

EQ I jY 2 3 , 8 }

I. Y(51, 12 )

EQ LY

24.8)

LY( 5 2 , 1 2 )

EQ LY

25.7)

LY( 5 3 , 1 1 )

EQ JiY 2 6 . 6 )

Ii Y( 5 4 , 1 0 )

27.7)

i.Y{55,1\)

EQ l.Y EQ LY

28.8)

EQ TE

1.1)

TE{2 9 , 2 9 )

LY( 5 6 , 1 2 )

T£(30,30)

EQ T E

2.2)

KQ T F

3.3)

TK( 3 1 , 3 1 )

EQ TE

4.4)

TE(32,32)

EQ TE

5.5)

TE(3 3 , 3 3 )

EQ TE

6.6)

T£(34,34)

EQ T E

7.7)

TE(35,35)

HO TK

8.8)

TE(36,36)

EQ T E

9 . 9) TE( 37.37)

EQ TE

10.10)

KQ TK

11.11)

TE( 3 9 , 39)

EQ TE

12.12)

TE(40.40)

EQ TE

13.13)

TE(41,41)

EQ TE

14.14)

TE( 4 2 , 4 2 )

EQ TE

15.15)

TE(43,43)

EQ TE

16.16)

TE(44,44) TH( 4 5 , 4 5 )

TE(3tt,38)

EQ TE

17.17)

EQ TE

18.18)

TE(46,46)

EQ TE

19.19)

T E ( 4 7 , 47)

KQ TK

20.20)

T K ( 4 8 , 48)

EQ TE

21.21)

TE( 4 9 ,4 9 )

EQ TE

22.22)

T E ( 5 0 , 50)

EQ TE

23.23)

TE{5 1 ,5 1 )

EQ TE

24.24)

TE(52,52)

EQ TK

25.25)

TH(53,5.3)

192

HERSHBERGER, MOLENAAR, CORNEA!.

KQ T E ( 2 6 , 2 6 ) T K ( 5 4 , f54 } EQ T E ( 2 7 , 2 7 ) T E ( 5 5 , 5 5 ) EQ T E ( 2 B , 2 8 ) T E ( 5 6 . 5 6 ) ST

I

ALL

OU NS

APPENDIX D: LISREL PROGRAM FOR STATE SPACE MODEL: INVOLVEMENT AND AFFECTION S T A T E S P A C E MODEL:

INPUT

DA .VI -

MA = CM

7 0 NO -

MO NY = 7 0

NE =

108 10

I NVOLVEMENT F A C T O R ;

LY = F U , F I

TE = DI

VA 1 . 0

I.Y( 1 , 1 )

VA 1 . 0

L Y ( 3 6 , 6 ) LYgic.al Bulletin, 103, 391-410. M olenaar, P. C. M. (1985). A dynamic factor model for the analysis o f m ultivariate tim e series. P.syc.hometrika. 50. 181-202. M olenaar. P. C. M. (1989). Aspects o f dynamic factor analysis. In Annals oj statistical information (pp. 183-199). Tokyo: T he Institute o f Statistical Mathematics. MoJenaar, P. C. M., de Gooijer, J. G., Sc Schmitz, B. (1992). Dynamic factor analysis of nonstationary multivariate time series. Psychometrika, 57, 333-3/19. SAS Institute. (1990a). SAS guide to macro jrrocessing. 2nd edition. Cary, NC: A uthor. SAS Institute. (1990b). SAS/IMl. software. Cary, NC: A uthor. SAS Institute. (1993). SAS/RTS user's guide. Cary, NC: A uthor. Schmitz, R. (1990). Univariate and multivariate time-series models: T h e analysis o f intrain­ dividual variability and im erindividual relationships. In A. von F.ye (F.d.), Statistical methods m longitudinal research. Vol. IP Time series and categorical longitudinal dala (pp. 351-386). San Diego, CA: Academic Press. Schwar/., G. (1978). Estim ating the dim ension o f a m odel. Annals of Statistics, 6, 461-404. Watson, D., Clark, I.. A., Sc T d le g e n , A. (1988). Developm ent and validation o f brief measures o f positive and negative affect: T he PANAS scales. Journal of Personality and Social Psychology, .54, 1063-1070. Wood, P., 8c Brown, D. (1994). T h e study o f intraindtvidual differences by m eans o f dynamic factor models: Rationale, im plem entation, an d in terp retatio n . Psychologiccd Bulletin, 116, 166-186.

C H A P T E R

S E V E N

Bootstrapping Techniques in Analysis of Mean and Covariance Structures Yiu-Fai Y ung University of North Carolina, Chapel Hill P e te r ML B e n tle r University of California, Los Angeles

It has been more than 15 years since Efron’s (1979) m onum ental paper on bootstrap methods. This paper has had a major impact 011 the field o f statistics. In Kotz and Jo h n so n 's (1992) book Breakthroughs in Statistics, which selected, highlighted, and reprinted the most significant works in statistics since the 1980s, E fron’s (1979) paper is the latest one selected. Therefore, although Efron's paper is quite “young" relative to the history of statistics, its significance has already been well recognized. T h e beauty of the bootstrap, of course, lies in its relaxation o f severe distributional assumptions of param etric models (e.g., multivariate normality o f the popu­ lation distribution). Statistical inferences using the bootstrap are usually made by com putations, thus providing solutions to statistical problem s that would otherwise be intractable. The bootstrap has been applied to many areas in statistics, and a vast am ount of research on it has been published since E fron’s introduction. The bootstrap also has diffused into die field o f behavioral sciences, though al a much slower pace. In the psychological literature, there has been a heated debate about the usefulness of the boot­ strap applied to the correlation coefficient (cf. Efron, 1988; Lunneborg, 1985; Rasmussen, 1987, 1988; Strube, 1988). A recent application has been tied to coefficients in discrim inant analysis (Dalgleish, 1994). In the socio­ logical literature, lo the best of our knowledge, the earliest introduction of the bootstrap methodology into the field can be traced back to the work by Bollen and Stine (1988), Dietz, Frey, and Kalof (1987), and Stine (1989). Although not primarily written for social scientists, articles an d books also 195

196

YUNC AND Ui'.NTI.LR

exisi lhat introduce the basic bootstrap ideas in a way com prehensible to social scientists with m inimal formal statistical training (e.g., Efron & Gong, 1983; Efron & T ibshirani, 1986, 1993; for a recent critical review o f the bootstrap research, see Young, 1994). T he present chapter focuses on bootstrap applications to covariance structure analysis, of which exploratory and confirm atory factor analysis are considered leading cases. Because general introductions to bootstrap m eth­ ods already are available to behavioral scientists (e.g., Lam bert, Wildt, & D urand, 1991; Mooney & Duval, 1993; Stine, 1989), our introduction to the bootstrap is minimal. O f course, we present all necessary definitions and basic ideas so d ia t this chapter will be self-explanatory. O u r m ain em phasis is on abstracting relevant bootstrap m ethods in to a framework suitable to covariance structures, and on ju d g in g w hether the bootstrap “works" for the given situations. Thus, the present exposition is m ore evaluative than introductory. Following a critical review of the literature, an analysis using a real data set illustrates the validity o f the bootstrap m ethod for estim ating standard errors when th e norm ality assum ption may n o t be true. Some related issues in the application o f the bootstrap to covariance structure analysis are also discussed here. Next, by extending Bcran and Srivastava’s (1985; see also Bollen & Stine, 1993) m ethod of bootstrap testing, we suggest two additional applications of the bootstrap to m ean an d covariance struc­ tures. Finally, some concluding com m ents are given.

APPLICATION OF THE BOOTSTRAP TO COVARIANCE STRUCTURE ANALYSIS

Covariance Structure Analysis

Suppose the target population P has a (cumulative) distribution function 𝓕 for a p × 1 vector of variables. A sample of size n is drawn from P according to the distribution 𝓕. Denote the p × 1 vector of observed (random) variables for the i-th individual as x_i (i = 1, 2, . . . , n). Then a covariance structure model, here called the null model M₀ (not to be confused with the model of uncorrelated variables used in Bentler-Bonett fit indices), is fitted to the sample data, of which 𝓕_n is the empirical distribution function. The model fitting procedure is characterized by the solution θ = θ̂ satisfying the equation

F(S_n, Σ(θ̂)) = min_{θ ∈ Θ} F(S_n, Σ(θ))

(1)

where F(·, ·) is a discrepancy function which measures the discrepancy between its two ordered arguments; S_n is a p × p symmetric matrix of the observed variances and covariances; Σ(θ) is a p × p matrix of theoretical



covariances "structured" in terms of θ, θ is considered to be a vector of model parameters, and Θ is an admissible set of values for θ. Usually, θ̂, the solution of θ in Equation 1, must be obtained by iterative procedures. If the value of F(S_n, Σ(θ̂)) in Equation 1 is denoted by F̂, then the corresponding model test statistic is defined as T = (n − 1)F̂. For many functions F, in regular situations and under the null hypothesis, T is distributed in large samples as a χ² variate. Without further specification, we assume that the discrepancy function defined in Equation 1 is the normal theory maximum likelihood (ML) discrepancy function [i.e., F(S_n, Σ(θ)) = log|Σ(θ)| − log|S_n| + trace(S_nΣ(θ)⁻¹) − p].

The Bootstrap Applied to Covariance Structure Analysis

The bootstrap method can be summarized in the following three steps:

Step 1. Define a resampling space R which usually contains n data points denoted by y_1, . . . , y_n. Each y_i is a p × 1 vector and is associated with the original observed data point x_i through the same index for individuals.

Step 2. Fix a point mass 1/n for each point in R, and draw a sample of n observations randomly from R with replacement. The vector of the variances and covariances of such a bootstrap sample is denoted by S*_{n,j}, where j is an index for bootstrap samples drawn from R and the starred symbol signifies its pertinence to the bootstrap sample, to be distinguished from the symbol for the original sample. Such starred notation for bootstrap samples will be used throughout this chapter. Fit the covariance structure model M₀ to this bootstrap sample by feeding S*_{n,j} into Equation 1 using some computer program (e.g., Bentler & Wu, 1995) and obtain θ̂*_j as the solution for θ. Accordingly, we have F̂*_j = F(S*_{n,j}, Σ(θ̂*_j)) and hence T*_j = (n − 1)F̂*_j.

Step 3. Repeat Step 2 B times, and obtain a set of bootstrapped values of parameter estimates and test statistics: {(θ̂*_j, T*_j), j = 1, 2, . . . , B}.

In the most natural situation, y_i is just set to be x_i for all i in the resampling space R in Step 1. In this case, the distribution on R is just the empirical distribution function 𝓕_n. To distinguish this "natural" method from others, we call it the completely nonparametric bootstrap, or simply the bootstrap when there is no further specification. It is completely nonparametric because the resampling from R does not depend on any assumption about the distributional family or on any covariance structure model for the data. In contrast, we also define some "model-based" bootstrap methods that are semi-nonparametric in nature, in the sense that R depends on certain covariance structural information, but without any assumption about the distributional family for the data. Theoretically, an



ideal (nonparametric, semi-nonparametric) bootstrapping method would be one that generates all possible samples of size n from R. In this case, B would equal nⁿ, the total number of possible samples from R of size n each. However, B is then too large to be practical for implementation. Moreover, a large number of samples would be ill-defined in the sense that these samples will have singular covariance matrices and thus will not be suitable for the fitting of a covariance structure model. Therefore, in practice bootstrap sampling is carried out with B being chosen to be some hundreds, say 500 or more, and ill-defined bootstrap samples from R must be dropped without being further analyzed.

In Step 2 and Step 3, although only the θ̂*_j's and T*_j's are listed, it is also possible to include any function of the bootstrap sample data in the list. For example, suppose β is an element of θ and the standard error of β̂ can be estimated by σ̂_β̂ in the parental sample; then it is also possible to estimate the standard error of β̂*_j by σ̂*_β̂,j in the j-th bootstrap sample. In this case, the set of bootstrapped values would also contain all the σ̂*_β̂,j's. For brevity in presentation, such a possibility has been made implicit in Steps 2 and 3.

A final remark about the bootstrap procedures already described concerns the sample size for resampling. Typically, the size of resampling is set to be the same as the actual sample size, as treated in Step 2. But there are some theoretical reasons that bootstrap resampling based on a smaller sample size, say m_n (m_n < n), would be preferred in some cases. In fact, using m_n for bootstrapping would remedy some well-known failures of the bootstrap based on a resampling of size n (e.g., for estimating the sampling distribution of the smallest or the largest order statistics; see a recent technical paper by Bickel, Götze, & van Zwet, 1994, and references therein). However, we only consider the case of bootstrapping with size n here.

The basic bootstrap principle can be simply stated as: The relation between a population P and its samples can be modeled by the relation between the resampling space R and its bootstrap samples. A direct consequence of this principle is that the distribution of the set of bootstrapped values {(θ̂*_j, T*_j), j = 1, 2, . . . , B} from R in Step 3 can be used as an estimator of the sampling distribution of (θ̂, T) from P. Notice that up to this point, we have not assumed any knowledge about the population distribution 𝓕 (not to be confused with the discrepancy function F). In fact, this is exactly why the bootstrap is so attractive as an alternative for fitting covariance structure models without the multivariate normality assumption. By doing the bootstrapping procedures described previously, the non-normality problem can be taken into account implicitly in the bootstrapping. Because most of the statistical problems in covariance structure analysis are about the sampling properties of parameter estimators and the



test statistic, and it is always possible to do bootstrapping based on the parental sample, it seems that the bootstrap principle is general enough to address a wide variety of statistical problems in covariance structure analysis without making any distributional assumption.

However, these assertions are based on faith in the bootstrap principle. We purposely italicized the phrase can be modeled in the bootstrap principle to signify that the bootstrap is merely a plausible method, whether it is good or bad in a given situation. The real applicability of the bootstrap to covariance structure analysis, as well as to all other innovative statistical techniques, is subject to examination. In contrast, blind faith can be characterized by replacing the phrase with is in the bootstrap principle. Blind faith in the bootstrap principle unfortunately may lead to over-optimism regarding the bootstrap results. Therefore, rather than simply advocate the bootstrap without any critical evaluation, we take a more critical stance in the hope that the usefulness of the bootstrap to the analysis of covariance structures can be grounded appropriately.

A Critical Review of Bootstrap Applications

We now look at four main applications of the bootstrap to covariance structure analysis in the literature. They are: (a) bias estimation, (b) estimation of standard errors, (c) construction of confidence intervals, and (d) model testing.

Bias Estimation

Usually, the bias of an estimator θ̂ for θ is defined as B(θ̂) = E(θ̂) − θ,
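As a concrete illustration of Steps 1 through 3 and of the bootstrap analogue of the bias just defined, the following minimal Python sketch resamples rows of a raw data matrix, recomputes an estimator from each bootstrap covariance matrix, and contrasts the mean of the bootstrapped estimates with the parental estimate. The simulated data and the generic fit_statistic function are illustrative stand-ins, not part of the original chapter; in an actual covariance structure analysis the fitting step would be carried out by an SEM program such as EQS or LISREL.

```python
import numpy as np

rng = np.random.default_rng(12345)

# Illustrative raw data (n cases, p variables); a real application would
# read the observed sample here instead of simulating one.
n, p = 200, 4
X = rng.normal(size=(n, p))

def fit_statistic(S):
    """Placeholder for the model-fitting step: any scalar estimator that is
    a function of the sample covariance matrix S (here, its largest
    eigenvalue, standing in for a model parameter estimate)."""
    return np.linalg.eigvalsh(S).max()

# Parental estimate (Equation 1 would be solved here in a real analysis).
S_n = np.cov(X, rowvar=False)
theta_hat = fit_statistic(S_n)

# Steps 1-3: completely nonparametric bootstrap with B replications.
B = 500
theta_star = np.empty(B)
for j in range(B):
    idx = rng.integers(0, n, size=n)        # draw n rows with replacement
    S_star = np.cov(X[idx], rowvar=False)   # S*_{n,j}
    theta_star[j] = fit_statistic(S_star)   # theta*_j

# Bootstrap estimate of the bias E(theta_hat) - theta, plus a bootstrap SE.
bias_hat = theta_star.mean() - theta_hat
print("parental estimate:", round(theta_hat, 4))
print("bootstrap bias estimate:", round(bias_hat, 4))
print("bootstrap standard error:", round(theta_star.std(ddof=1), 4))
```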

π̂*(M_A, α) = #{ T*_j > K_df⁻¹(1 − α), j = 1, 2, . . . , B } / B

(31)


TABLE 7.5
Power Calculations/Estimation Using Satorra and Saris (1985) Example (n = 100)

             Emp.    Appr.   "Best" B-M_A   "Best" B-M̂_A   "Ave." B-M_A    "Ave." B-M̂_A
α = .001     .033    .061    .027           .060           .065 (.018)     .075 (.083)
α = .005     .103    .141    .102           .122           .143 (.022)     .148 (.142)
α = .010     .177    .198    .159           .182           .203 (.025)     .198 (.165)
α = .025     .297    .302    .284           .288           .307 (.022)     .282 (.201)
α = .050     .390    .407    .412           .404           .415 (.022)     .369 (.222)
α = .100     .520    .535    .558           .528           .540 (.023)     .472 (.231)

Note. Emp.: Empirical power obtained from simulations by Satorra and Saris (1985); Appr.: Approximate power calculations proposed by Satorra and Saris (1985); B-M_A: The bootstrap-M_A result with assumed parameter values under H_A; B-M̂_A: The bootstrap-M̂_A result without assuming parameter values under H_A. "Best": The bootstrapping results of the sample, among the total of ten samples, which shows the closest agreement with the empirical power calculation. "Ave.": The averaged results based on ten random samples. Parenthesized values are standard deviations.

where df is the degrees of freedom of the test statistic and K_df(·) is the distribution function of the central χ² variable with df degrees of freedom. Here, df should be the same as the degrees of freedom in the null model. Up to now, we have not made any distributional assumption about the data with the bootstrap-M_A method, as is clearly shown in Table 7.4. Instead, the distribution under the alternative model M_A is estimated by the distribution of the y_i, say 𝓕_{n,A}, and has been taken into account implicitly in the bootstrapping procedures.

Furthermore, we can actually relax one more specification of the alternative model. That is, only the structural equation model is hypothesized, but without any specified parameter values for θ_A. This will be our "bootstrap-M̂_A" method introduced here. If the parameter values, say θ_A, are not specified, we can get some estimates, say θ̂_A, by using the minimization in Equation 1 for fitting the alternative model M_A. However, θ̂_A must be consistent for the method to work appropriately. Denote the estimated theoretical moments as Σ̂_A = Σ(θ̂_A) and μ̂_A = μ(θ̂_A); then the resampling space for the bootstrap-M̂_A method is defined as:

R(𝓕̂_A) = { y_i = Σ̂_A^{1/2} S_n^{−1/2} (x_i − x̄_n) + μ̂_A,  i = 1, 2, . . . , n }

(32)
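A minimal Python sketch of this model-based resampling and of the power estimate in Equation 31 follows. The alternative-model moments mu_A and Sigma_A, the fixed null structure, and the degrees of freedom are illustrative placeholders (a completely fixed null structure is used so that the ML test statistic can be computed without iterative fitting), since in practice the fitting would again be done with an SEM program; the transformation of the data and the counting of bootstrap test statistics exceeding the central χ² critical value follow the procedure described above.

```python
import numpy as np
from scipy.stats import chi2
from scipy.linalg import sqrtm, inv

rng = np.random.default_rng(2024)

# Illustrative observed sample (stand-in for the real data).
n, p = 100, 3
X = rng.normal(size=(n, p))
xbar, S_n = X.mean(axis=0), np.cov(X, rowvar=False)

# Hypothetical alternative-model moments (placeholders for Sigma(theta_A_hat)
# and mu(theta_A_hat) obtained by fitting the alternative model M_A).
Sigma_A = np.eye(p) + 0.3 * (np.ones((p, p)) - np.eye(p))
mu_A = np.zeros(p)

# Equation 32: transform the data so that its moments match the alternative model.
A = np.real(sqrtm(Sigma_A)) @ np.real(sqrtm(inv(S_n)))
Y = (X - xbar) @ A.T + mu_A

def ml_statistic(sample, Sigma0, mu0):
    """T = (n - 1) times the normal-theory ML discrepancy for a completely
    fixed null structure (no free parameters), so no iteration is needed."""
    m, S = sample.mean(axis=0), np.cov(sample, rowvar=False)
    Si = inv(Sigma0)
    F = (np.log(np.linalg.det(Sigma0)) - np.log(np.linalg.det(S))
         + np.trace(S @ Si) - p + (m - mu0) @ Si @ (m - mu0))
    return (len(sample) - 1) * F

# Hypothetical null structure and its degrees of freedom (all moments fixed).
Sigma_0, mu_0 = np.eye(p), np.zeros(p)
df = p * (p + 1) // 2 + p

# Bootstrap from R(F_A): draw rows of Y with replacement and compute T*_j.
B = 500
T_star = np.array([ml_statistic(Y[rng.integers(0, n, size=n)], Sigma_0, mu_0)
                   for _ in range(B)])

# Equation 31: estimated power at level alpha.
alpha = 0.05
crit = chi2.ppf(1 - alpha, df)
print("estimated power at alpha = .05:", round(float(np.mean(T_star >= crit)), 3))
```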

As in the bootstrap-M_A method, the power of the test at a specific α level can be estimated using Equation 31 after bootstrapping on R(𝓕̂_A). The estimator is denoted here by π̂*(M̂_A, α). We stress that the two bootstrap methods proposed here are intended to provide "estimates" of power, because the population distribution has been estimated implicitly from the data instead of being assumed. This



approach should be clearly distinguished from the empirical and approximate calculations suggested by Satorra and Saris (1985) in Table 7.4. If the multivariate normality assumption can be assumed, then the power calculation given by the empirical method used by Satorra and Saris (1985) is exact (subject to numerical accuracy due to simulation). The approximation method, on the other hand, only gives values of power that are exact for extremely large samples. In order to obtain some initial evidence about the proposed bootstrap-M_A and bootstrap-M̂_A methods, a small simulation was conducted (for n = 100). We did not simulate the case for n = 600, as was done by Satorra and Saris (1985), because the power at all α levels is essentially too high (all of them are greater than .94) to make a useful comparison between methods. The setup of the simulation is the same as specified in Satorra and Saris (1985), and the corresponding results here are comparable to theirs. For the two proposed bootstrap methods, the number of bootstrap replications B was set to be 1000.

In Column 4 of Table 7.5, the result of a single simulation using the bootstrap-M_A is shown. Ignoring the label of Column 4 at this moment, it seems that the power estimates using the bootstrap-M_A are quite close to those of the empirical method, which are supposed to be exact. However, the bootstrap-M_A did this without assuming normality (although the data are actually drawn from a multivariate normal distribution). In Column 5, where the result of a single simulation using the bootstrap-M̂_A is shown, the estimates of power at all α levels are still close to the empirical power, except perhaps for α = .001 (but it is no worse than the approximation method). However, the bootstrap-M̂_A method did this without any knowledge about the distribution and parameter values under the alternative model M_A!

We have purposely misled our readers by showing the most optimistic results in Columns 4 and 5 out of ten repeated samplings from a multivariate normal distribution. That is why we label "Best" in Columns 4 and 5. In fact, not all ten random samples yielded good estimates of power using the bootstrap methods. Some of them are really bad, as compared to the empirical method. As is the usual strategy for studying any innovative estimation method, we assess the quality of estimators by looking at their expected values and standard errors. The last two columns in Table 7.5 for the two bootstrap methods serve such a purpose. In Column 6, where the results based on ten repeated samplings for the bootstrap-M_A are shown, it is observed that the mean values of the power estimates are still close to the empirical method, but it seems that they are much closer to those of the approximation method. The standard errors are quite small, as compared to the estimates of power. Therefore, we may conclude that the bootstrap-M_A method works fine for power estimation even without the normality information. In contrast, Column 7 shows that although the average values of the power estimates using the bootstrap-M̂_A method seem to be



close enough to either the empirical or the approximation method, their standard errors are quite large (i.e., they are really not reliable). Apparently, this is the price one pays for not assuming parameter values, as compared to the bootstrap-M_A method.

Certainly, our simulation results are quite limited because we just have 10 repeated samples drawn from a multivariate normal distribution. In addition, we did not examine these bootstrap methods under nonnormal distributions, which is usually more interesting for judging the validity of bootstrap methods. Nonetheless, our results suffice to show that a single simulation result about any innovative bootstrap method can be misleading. Our crude sampling experiment clearly shows that the reliability of the estimates using the bootstrap must also be taken into account. In this respect, we show initial evidence that the bootstrap-M_A method may work well, whereas the bootstrap-M̂_A method may not be reliable enough. Further research should extend the present methodology to nonnormal conditions and increase the number of repeated (independent) samples to have a clearer picture about the proposed bootstrap-M_A and bootstrap-M̂_A methods.

A final comment about the present methodology is best illustrated in Table 7.4. That is, the proposed bootstrap methods are very much like the empirical method, in which simulation is used for getting solutions, but notably without a distributional assumption (and without parameter values for the bootstrap-M̂_A). Such resemblance to the empirical method suggests the following conceptual formula for the bootstrap-M_A: "Bootstrap-M_A" = "Empirical Method Without Distributional Assumption."

CAUTIONARY NOTES AND CONCLUSIONS

There are some cautionary notes that we have not mentioned explicitly but that are quite important for a better understanding of the bootstrap. Let us now have a brief look at these, and then make our final conclusions.

1. Assumption of iid (Independent and Identically Distributed) Property of Observations. This is a central assumption for the bootstrap to work in the present framework. Such an assumption is essential for justifying the replacement sampling from R of the bootstrap. Bollen and Stine (1988) illustrated the problem of lacking such an iid property for bootstrapping. Had this iid property not been true, the bootstrap resampling would have to be modified. An obvious example that lacks the iid property is multilevel covariance structure analysis.

2. Adequacy of Estimation of Standard Errors by the Bootstrap Does Not Mean That It Is the "Best" Method. Although the bootstrap may give accurate esti-



mates of standard errors, this does not mean that estimation using the bootstrap (coupled with certain estimation methods) must yield (asymptotically) efficient estimates. For example, when applying the bootstrap to the ordinary least squares (OLS) method for model estimation, one may get unbiased estimates of standard errors, but this by no means implies that the bootstrap with OLS estimation is the "best" method available. A method such as ADF (Browne, 1984) may be more efficient asymptotically for parameter estimates under arbitrary distributions. The point here is that even if the bootstrap may help a particular estimation method to work more accurately under a set of less severe assumptions, one may still prefer to use some other estimation method due to the desire to achieve certain statistical properties like efficiency, robustness, and so on.

3. Sample Size Requirement. It must be emphasized that the bootstrap method is not a panacea for small sample sizes. The reason is obvious because the success of the bootstrap depends on the accuracy of the estimation of the parent distribution (and/or under a particular model) by the observed sample distribution. For an acceptable degree of accuracy, one cannot expect a very small sample for the bootstrap to work satisfactorily. Such an argument is supported by the simulation results obtained by Ichikawa and Konishi (1995) and Yung and Bentler (1994). In structural equation modeling, perhaps the real advantage of using the bootstrap may be its "automatic" refinement on standard asymptotic theories (e.g., higher order accuracy) so that the bootstrap can be applied even for samples with moderate (but not extremely small) sizes.

Now, let us conclude the present chapter with the following points:

1. Applicability of the Bootstrap to Covariance Structure Analysis. Certainly, the bootstrap may yield results that will be useful for covariance structure analysis and structural equation modeling. However, this conclusion is based on differential evidence supporting the use of bootstrapping, as discussed previously. The bootstrap principle cannot be blindly trusted because there are cases in which it will not work (see, e.g., Bollen & Stine, 1988, 1993; Bickel et al., 1994, for some interesting examples of bootstrap failures). But the worst thing is that it is usually difficult to give a general rule to predict when the bootstrap principle will fail. More evidence for the validity of the bootstrap should be gathered for different areas of application in structural equation modeling. In addition, there are some issues regarding evaluation of bootstrap methods. Since the bootstrap is designed to address situations without strong distributional assumptions, the trade-off is that one can only get "estimates," instead of exact values, on some occasions, as opposed to traditional estimation methods with distributional assumptions. The p level of test statistics and the power of the test



are two examples of this. In these cases, we must assess the precision (or the variability) of the bootstrap estimates. The bootstrap estimates must also be precise to be useful. Certainly, the evaluation of the precision of the bootstrap estimation cannot be done simply by using a single example or a single simulation. More extensive simulation studies are needed. Practically, it would be nice to have a method for assessing (or estimating) the precision of the bootstrap estimates in each single application. The jackknife-after-bootstrap method introduced in Efron and Tibshirani (1993, pp. 275-280) may provide a useful empirical technique for estimating the standard errors of the bootstrap estimates (see also Efron, 1992). This would be an important aspect of the bootstrap that needs to be investigated more in structural equation modeling.

2. Completely Nonparametric Bootstrap Versus Bootstrap-M₀ and Bootstrap-M_A. Modifications in the resampling space for bootstrapping are needed in some applications such as estimating the sampling distribution of a test statistic and the power of the test against an alternative model. In other situations, it seems that the completely nonparametric approach is more reasonable and realistic.

3. 1-Iteration Versus Full Iteration. It is safe to use full-iteration model fitting procedures within a bootstrapping loop because this does not require much additional computer time and appears to give more accurate results (e.g., estimation of significance levels) in practical applications, as compared to the 1-iteration method.

ACKNOWLEDGMENTS

This research was supported in part by a University Research Council grant of the University of North Carolina at Chapel Hill to the first author, and grants DA01070 and DA00017 from the National Institute on Drug Abuse to the second author. We thank Dr. Kenneth A. Bollen for his useful comments that helped to correct some mistakes in an earlier version of the chapter.

REFERENCES

Bentler, P. M. (1989). EQS structural equations program manual. Los Angeles, CA: BMDP Statistical Software, Inc.
Bentler, P. M., & Dijkstra, T. (1985). Efficient estimation via linearization in structural models. In P. R. Krishnaiah (Ed.), Multivariate analysis VI (pp. 9-42). Amsterdam: North-Holland.
Bentler, P. M., & Wu, E. J. C. (1995). EQS for Windows user's guide. Encino, CA: Multivariate Software.



Beran, R., & Srivastava, M. S. (1985). Bootstrap tests and confidence regions for functions of a covariance matrix. Annals of Statistics, 13, 95-115.
Bickel, P. J., Götze, F., & van Zwet, W. R. (1994). Resampling fewer than n observations: Gains, losses, and remedies for losses. Diskrete Strukturen in der Mathematik (Preprint 94-084). Bielefeld, Germany: Universität Bielefeld.
Bollen, K. A., & Stine, R. (1988). Bootstrapping structural equation models: Variability of indirect effects and goodness of fit measures. Paper presented at the Annual Meetings of the American Sociological Association, Atlanta, GA.
Bollen, K. A., & Stine, R. (1990). Direct and indirect effects: Classical and bootstrap estimates of variability. In C. C. Clogg (Ed.), Sociological methodology (pp. 115-140). Oxford: Basil Blackwell.
Bollen, K. A., & Stine, R. A. (1993). Bootstrapping goodness-of-fit measures in structural equation models. In K. A. Bollen & J. S. Long (Eds.), Testing structural equation models (pp. 111-135). Newbury Park, CA: Sage.
Boomsma, A. (1983). On the robustness of LISREL (maximum likelihood estimation) against small sample size and non-normality. Amsterdam: Sociometric Research Foundation.
Boomsma, A. (1986). On the use of bootstrap and jackknife in covariance structure analysis. Compstat 1986, 205-210.
Browne, M. W. (1984). Asymptotically distribution-free methods for analysis of covariance structures. British Journal of Mathematical and Statistical Psychology, 37, 62-83.
Browne, M. W., & Cudeck, R. (1989). Single sample cross-validation indices for covariance structures. Multivariate Behavioral Research, 24, 445-455.
Chatterjee, S. (1984). Variance estimation in factor analysis: An application of the bootstrap. British Journal of Mathematical and Statistical Psychology, 37, 252-262.
Cudeck, R., & Browne, M. W. (1983). Cross-validation of covariance structures. Multivariate Behavioral Research, 18, 147-167.
Cudeck, R., & Henly, S. J. (1991). Model selection in covariance structures analysis and the "problem" of sample size: A clarification. Psychological Bulletin, 109, 512-519.
Dalgleish, L. I. (1994). Discriminant analysis: Statistical inference using the jackknife and bootstrap procedures. Psychological Bulletin, 116, 498-508.
Dietz, T., Frey, R. S., & Kalof, L. (1987). Estimation with cross-national data: Robust and nonparametric methods. American Sociological Review, 52, 380-390.
Dijkstra, T. K. (1981). Latent variables in linear stochastic models. Groningen: Rijksuniversiteit.
Efron, B. (1979). Bootstrap methods: Another look at the jackknife. Annals of Statistics, 7, 1-26.
Efron, B. (1982). The jackknife, the bootstrap and other resampling plans. Philadelphia: SIAM.
Efron, B. (1985). Bootstrap confidence intervals for a class of parametric problems. Biometrika, 72, 45-58.
Efron, B. (1987). Better bootstrap confidence intervals (with discussion). Journal of the American Statistical Association, 82, 171-200.
Efron, B. (1988). Bootstrap confidence intervals: Good or bad? Psychological Bulletin, 104, 293-296.
Efron, B. (1992). Jackknife-after-bootstrap standard errors and influence functions. Journal of the Royal Statistical Society: Series B, 54, 83-127.
Efron, B., & Gong, G. (1983). A leisurely look at the bootstrap, the jackknife and cross-validation. American Statistician, 37, 36-48.
Efron, B., & Tibshirani, R. (1986). Bootstrap measures for standard errors, confidence intervals, and other measures of statistical accuracy. Statistical Science, 1, 54-77.
Efron, B., & Tibshirani, R. J. (1993). An introduction to the bootstrap. New York: Chapman & Hall.
Holzinger, K. J., & Swineford, F. (1939). A study in factor analysis: The stability of a bi-factor solution. Supplementary Educational Monographs, 48, 1-91.



Ichikawa, M., & Konishi, S. (1995). Application of the bootstrap methods in factor analysis. Psychometrika, 60, 77-93.
Jorgensen, M. A. (1987). Jackknifing fixed points of iterations. Biometrika, 74, 207-211.
Kotz, S., & Johnson, N. L. (1992). Breakthroughs in statistics: Volumes 1 and 2. New York: Springer-Verlag.
Lambert, Z. V., Wildt, A. R., & Durand, R. M. (1991). Approximating confidence intervals for factor loadings. Multivariate Behavioral Research, 26, 421-434.
Linhart, H., & Zucchini, W. (1986). Model selection. New York: Wiley.
Lunneborg, C. E. (1985). Estimating the correlation coefficient: The bootstrap approach. Psychological Bulletin, 98, 209-215.
Mooney, C. Z., & Duval, R. D. (1993). Bootstrapping: A nonparametric approach to statistical inference. Newbury Park, CA: Sage.
Rasmussen, J. L. (1987). Estimating the correlation coefficient: Bootstrap and parametric approaches. Psychological Bulletin, 101, 130-139.
Rasmussen, J. L. (1988). "Bootstrap confidence intervals: Good or bad": Comments on Efron (1988) and Strube (1988) and further evaluation. Psychological Bulletin, 104, 297-299.
Satorra, A., & Saris, W. E. (1985). Power of the likelihood ratio test in covariance structure analysis. Psychometrika, 50, 83-90.
Schenker, N. (1985). Qualms about bootstrap confidence intervals. Journal of the American Statistical Association, 80, 360-361.
Sörbom, D. (1974). A general method for studying differences in factor means and factor structures between groups. British Journal of Mathematical and Statistical Psychology, 27, 229-239.
Stine, R. A. (1989). An introduction to bootstrap methods: Examples and ideas. Sociological Methods and Research, 18, 243-291.
Strube, M. J. (1988). Bootstrap type I error rates for the correlation coefficient: An examination of alternate procedures. Psychological Bulletin, 104, 290-292.
Young, G. A. (1994). Bootstrap: More than a stab in the dark? (with comments). Statistical Science, 9, 382-415.
Yung, Y.-F. (1994). Finite mixtures in confirmatory factor-analytic models. Unpublished doctoral dissertation, UCLA.
Yung, Y.-F., & Bentler, P. M. (1994). Bootstrap-corrected ADF test statistics in covariance structure analysis. British Journal of Mathematical and Statistical Psychology, 47, 63-84.
Zhang, J., Pantula, S. G., & Boos, D. D. (1991). Robust methods for testing the pattern of a single covariance matrix. Biometrika, 78, 787-795.

CHAPTER

EIGHT

A Limited-Information Estimator for LISREL Models With or Without Heteroscedastic Errors Kenneth A. Bollen, University of North Carolina, Chapel Hill

The estimation of structural equation models (SEMs) is marked by two traits. One is the use of full-information estimators such as maximum likelihood (ML) or generalized least squares (GLS). The other is that the derivations of these estimators assume that the variances of the disturbances or errors in each equation are constant across observations, that is, they are homoscedastic. This chapter has two primary purposes. First, I present an alternative two-stage least squares (2SLS) estimator and its asymptotic standard errors for the coefficients of LISREL models¹ developed in Bollen (1995, in press) under the assumption of homoscedastic errors. Second, I apply results from econometrics and sociometrics to expand the model to allow for heteroscedasticity of the disturbance term. This is done by providing heteroscedastic-consistent standard errors and developing alternative estimators that allow known or unknown forms of heteroscedasticity.

The 2SLS estimator recommended is a limited-information estimator in that researchers estimate coefficients one equation at a time. The full-information estimators (e.g., ML) that dominate the SEM field estimate all parameters in all equations simultaneously. This is both an asset and a liability. It is an asset in that information from the whole system can improve the efficiency of the estimator. However, a key drawback is that

¹The term LISREL is used in the generic sense of SEMs with latent variables that include confirmatory factor analysis, recursive and nonrecursive equations, and simultaneous equation models as special cases.




specification error in one part of the system can bias coefficient estimates throughout the system. Limited-information estimators sometimes better isolate the biases due to specification error. Given that virtually all SEMs are misspecified, limited-information estimators are a viable option to pursue.

Heteroscedasticity of errors or disturbances is scarcely discussed in the SEM literature outside of the special case of econometric and regression models. This refers to the problem of errors or disturbances in either the latent variable or measurement models having variances that differ across observations. That is, the same disturbance might have a variance of 10 for the first case, a variance of 5 for the second case, and so on. The problem is not unusual in regression applications and there is no reason to think that it is less common in the more general SEM models. Homoscedasticity, or equal variances of errors and disturbances across observations, is implicit in the LISREL model.² Heteroscedastic disturbances raise doubts about the appropriateness of the significance tests derived from the ML estimator. It is possible to find heteroscedastic-consistent asymptotic standard errors for the 2SLS estimator of LISREL models. Furthermore, the 2SLS estimator can take into account heteroscedasticity so as to increase its efficiency, an option that is not yet developed for the full-information estimators of LISREL models.

The first section briefly reviews the literature on limited-information estimators. The next section presents the notation, model, and the 2SLS estimator developed in Bollen (in press). The third section generalizes the estimator to allow heteroscedasticity of errors in the measurement or latent variable model. I then present a simulation example, and the last section in the chapter contains the conclusions.

LIM ITED-INFORM ATION ESTIMATORS IN LISREL MODELS T he literature on lim ited-infonnation estim ators in sim ultaneous equation m odels and econometric, applications is vast an d 1 d o n o t review it here (see, e.g., Bowden & T urkington, 1981). T he research on limited-information estim ators o f factor analysis o r LISREL m odels is less extensive. Madansky (1964) suggested an instrum ental variable (IV) estim ator for the factor loadings in factor analysis models. H agglund (1982) an d Joreskog (1983) proposed IV and 2SLS estim ators for factor analysis m odels with u n co rre­ lated errors o f m easurem ent. O th e r studies have exam ined issues such as scaling and com putational algorithm s (Ctideck, 1991; Je n n rich , 1987), alaAn explit/il statem ent o f the assum ptions o f hotnoscedasUcily is in Bollen {1989, pp. M - l b , 18).

229

8. HETEROSCEDASTICITY IN LISREL MODELS

lowing correlated errors of m easurem ent (Bollen, 1989, p. 415), an d p ro ­ viding an overall lesl statistic for a factor analysis estim ated by IV (Satorra & Bentler, 1991). G erbing and H am ilton (1994) provided an iterative lirnited-inform ation estim ator for factor analysis m odels w here all variables have a factor complexity of one. T he M onte-Carlo evidence to date shows that the IV and 2SLS estim ators perforin well in factor analysis models (Brown, 1999; H agglund, 1983; Lukashov, 1994). Less fre q u en t is work on lim ited-inforinalion estim ators for the latent variable m odel (o r “structural m odel"). Joreskog an d Sorbom (1986) an d Lance, Cornwell, and Mulaik (1988) proposed estim ators for the latent variable m odel. They first estim ated the covariancc (correlation) matrix of the latent variables by estim ating die m easurem ent m odel and then they used the form ulas for 2SLS applied to the covariance matrix of the latent variables. A drawback o f these estim ators is th at little is known ab o u t the distribution o f the coefficient estimators. T h e analytical results for the distribution arc com plicated by the use of an estim ate o f the covariance (correlation) m atrix of the latent variables for th e estim ates o f the coeffi­ cients o f the latent variable equation. In addition, in th e case of jo resk o g and Sorbom ’s (1986) estim ator, it d epends on having u ncorrclatcd errors of m easurem ent in the m easurem ent model. T h e nex t section describes an alternative 2SLS estim ator that does not have these restrictions.

M ODEL AND ESTIMATOR T his section p rese n ts the m odel an d 2SLS estim ato r for Ihe LISREL model u n d e r the assum ption o f hom oscedastic errors. It draws heavily from Bollen (1995, in press). T h e m odel is presented in LISREL m atrix notation ( Joreskog & Sorbom , 1986), although the same estim ator applies when using o th e r notational systems. T he laient variable model is: H = a + B n + !'% +

C,

(1)

where n is an m x I vector of latent endogenous random variables, B is a m x m m atrix of cocfficicnts th a t give the im pact of the i)s on each other, c, is an n x 1 vector of latent exogenous variables, V is th e m x n coefficient matrix giving | ’s im pact on n, a is an m x 1 vector o f in tercep t term s, an d Q is an m x 1 vcctor o f random disturbances with the E(Q = 0 an d COV(q, £') = 0. Assume for now that the disturbance for each equation is ho­ moscedastic and nonautocorrelated across observations. Two equations sum m arize the m easurem ent model of th e SEM: x =

tx +

A,£; + 8

(2)

230

BOLLEN

y = Xy + A ^ + c

(3)

where x is a q x 1 vector o f observed indicators o f q, A, is a q x n matrix o f “factor loadings” (regression coefficients) giving the im pact of % on x, t x is q x I vector o f intercept term s, an d 8 is a q x 1 vector of m easurem ent errors with F.(8) = 0 an d COV(£,, 8') = 0. T h e 6, for th e \th equation is hom oscedastic and n o n autocorrelated across cases. Similarly in Equation 3 y is a p x 1 vector of indicators o f T|, \ is the p x m matrix o f factor loadings, zy is a p x 1 vector o f intercept terms, and s is a p x I vector o f errors with E(e) = 0 and CO V (n, e ') = 0. A nd £j is hom oscedastic and n onautocorrelated across observations. A nother assum ption is that e, 6, and C, are mutually uncorrelated. Suppose th a t for scaling purposes th e m odel has o ne indicator per laten t variable for which its factor loading is set to o n e and its intercept is set to zero (see Bollen, 1989, pp. 350—352). Assume lhat the scaling variable is only influenced by a single laten t variable an d an e rro r term. A lthough most m odels will satisfy this condition, there are some, such as m u kitrait-m ultim ethod models, th a t will not. I d o not consider such cases here. We begin by sorting the y and x vectors so th at the indicators th a t scale the laient variables com e first. T h en we can create partitioned vectors for y and x: >1

and x =

y2

*5

w here yi is the m x 1 vector o f y’s lhat scale r], y2 consists o f the (p-m) x 1 vector o f rem aining y variables, x, is th e n x 1 vector of x ’s th at scale %, an d x 2 is the (q-n) x 1 vector o f rem aining x variables. This m eans that y, = n + £ ,

(4)

or n=y.

(5)

x, = ^ + 8 ,

(6 )

and

or 4 = x ,-8 ,

(V )

8. HETEROSCEDASTICITY IN LISREL MODELS

231

w here e, and 8, contain th e errors th a t correspon d to y , an d x,. Substituting E quadons 5 and 7 into E quation 1 leads to: y, = a t By, t Fx, + u

(8)

where u = e, - B e , - TS, + N ote that these m anipulations recast the latent variable m odel into a sim ultaneous equation m odel where all vari­ ables are observed except for the com posite disturbance term . T he m ain difference is th a t in general u an d x, are correlated rath e r th an u n co rre­ lated as is assum ed in econom etric sim ultaneous equation models. In some models, all or a subset of the variables in x, could be exogenous and hence uncorrelated with u. C onsider a single equation from E quation 8. R epresent the i'h equation from y, as: >' = a, + Bj', + I > , + Uj

(9)

where y, is the i;h y from y,. a, is die corresponding in tercept, B, is the ith row from B, F, is the ilh row from F , and u, is the i'1' elem ent from u. Define A; to be a colum n vcctor that contains a. an d all of the nonzero elem ents of B, and F , strung to gether in a colum n. Let N equal the nu m b er o f cases and Z, be an N row m atrix th a t contains Is in the first colum n and the N rows o f elem ents from y, and x, that have non zero coefficients associated with them in the rem aining colum ns. T h e N x 1 vcctor y; contains the N values o f y, in the sample and u, is an N x 1 vector o f the values o f u ;. T hen we can rewrite Equation 9 as: y,= Z A + ut

(10)

In all but exceptional situations, at least som e o f th e variables in Z, will be correlated with u, and ordinary least squares (OLS) is inappropriate to estimate die coefficients in Equation 10. However, th e 2SLS estim ator p ro ­ vides an alternative consistent estim ator o f A,. T o apply SSLS we must find IVs for Z,. T h e IVs m ust be: (a) correlated with Zi, (b) uncorrclatcd with u,. an d (c) at least as m any IVs as there are variables in Z,. T he pool of potential I V s com es from those ys and xs not included in Z (excluding, o f course, y:) an d any variables in Z that are uncorrelated with u,. We can check condition (a) by looking al the sam ple correlations between the potential IVs and Z,. Identification requires that the third condition be satisfied an d it is a sim ple cou n tin g rule th a t is easy to check. T he sccond condition is m ore difficult to establish and the full model structure is essential in evaluating it. Recall that u, equals (e, - B,e1 - r \ 5 , + Q . T h e IVs m ust be uncorrelated with each com ponent, in the

232

BOLLEN

com posite. T he B,r., term rules o u t using ys that both scale the latent variables and that have a nonzero impact on yr In addition, because of B,!',, we cannot use ys as IVs that have correlated errors o f measurement, with those ys in y, that ap p e ar in the y%equation. T he r,8 , term rules out xs as IVs that scale the latent Ep an d that have a nonzero direct im pact on y,. In addition, any xs wilh correlated errors o f m easurem ent with such xs lhat ap p ear in the y; equation cannot be IVs. Finally, the Q in th e u, elim inates any ys as IV that correlate with This means, for exam ple, that ys that arc indicators of T|s th at are influenced by T|, would be ineligible as IVs. If any doubl o n the suitability o f an IV, say v’i, rem ains, then the researcher can check its correlation by determ ining the COV(vi( u,). T h e equadon for u, should be substituted in an d the reduced form for v, should replace v,. T h e redu ced form is the equation for v, with only exogenous and erro r variables on th e right-hand side. T h en the researcher can find w hether the C O V ty, u,j is zero. See Kollen (1995, in press) for fu rth e r discussion o f the selection o f IVs in these models. For now assume th a t we collcct all eligible IVs for Z, in an N row matrix V,. T h en the first stage o f 2SLS is to regress Z, on V, w here Equation 11 provides the coefficient esdm ator:

ow v /z,

(ii)

Zj = V,(V'V,)-’V[ Z,

(12)

A

Form Z, as:

A

T he second stage is the OLS regression of yt on Z} so that A

A

A

A

A, = (Z 'Z ^ 'Z 'y ,

(13)

T he 2SLS estim ator assumes that: p l i m ( j , V ' Z ^ I VA

(14)

pkm('-y;V)=Zvy and

(15)

pUm(-sy;

Uj)

=0

(16)

where plim refers to the probability limits as N goes to infinity of the term in parentheses. T he right-hand side matrices of E quations 14 through 16 are finite, l y V is nonsingular, an d ZyZ is nonzero. We assum e that E[u,u/] = o;j,I and E(iij) = 0.

8. IIETEROSCEDASTICITY IN LISREL MODELS

tor

233

U nder these assum ptions, the 2SI.S estim ator, of A,.Assume that:

is a consistent estim a­

^ Z > ,- A N ( 0 ,a ^ )

(17)

pi>milN z ; z y = ^

(i8 )

w here A N (0 , o l i^ J ) refers to an asymptotic norm al distribution. With these assum ptions, th e asymptotic distribution o f A, is norm al with a covariance m atrix of of; £^2. . T h e estim ate of the asymptotic covariance m atrix is: acov(A() = o l (Z-'Zj) 1

(19)

w here ^ ^ (y i-z A r^ -z A V N

(20)

and acov signifies the sam ple estimate of the asymptotic covariance. T he square root o f the m ain diagonal o f E quation 19 gives an estim ate o f the asymptotic standard errors of the coefficient estimates. It should be noted lhat we have n o t assum ed th at the observed variables (xs, ys, o r zs) are norm ally distributed. So the 2SI .S estim ator is applicable even for some observed variables that com e from n o n norm al distributions. Estimates o f the intercepts and factor loadings o f the m easurem ent m odel follow an analogous procedure to that of the latent variable model. M ore specifically, consider the m easurem ent model for x in Equation 2. Substitute Equation 7 lo r c, into Equation 2, which leads to: x = t, + / \ x , - A^S, + 5

(21)

Sincc x,, the scaling variables, com e first in x, th e first n rows o f t , are zero and the first n rows an d n colum ns o f A„ form an identity matrix. Choosing one of th e xs from the x2 vector o f nonscaling xs leads to: x, -

+\

xi + d;

(22)

w here d, equals ( - A^S, + 5|), t „. is the in tercep t for th e x, equation, A„. is ihe ith row o f A,, 8, is th e e rro r o f m easurem ent for x,. Define C, to Ik : a colum n vector that contains t „. and all of th e nonzero factor loadings in A*, put together in a colum n. Nl is the n um ber of observations an d lei W; be an N row matrix that contains Is in the first colum n an d the N rows of elem ents from x, that have nonzero iactor loadings associated with them

234

BOILF.N

in the rem aining columns. T he x* vector i s N x l an d contains the N values o f x, in the sample an d dj is an N x 1 vector o f th e values of d,. T hese definitions are sim ilar to those for the latent variable model. In the m easurem ent m odel for x, they lead to xi=

W jC j

+

d (

(2 3 )

E quation 23 is analogous to E quation If). In fact, the 2SLS estim ator applies to this equation as it did to E quation 10. T h e m ajor difference is th at a researcher m ust select IVs that are correlated with W; and uncorrelated with dj. T he 2SLS estim ator for the x m easurem ent m odel is: C = (W'W.y' Wj’xj

(24)

where V fV 'V y V 'W , T he V, is the m atrix of IVs that are suitable for the \th equation o f the m easurem ent m odel for x. T h e p rocedure is so similar to lhat described for the latent variable m odel that it need not be described any fu rth er here. In addition, the same steps apply to estim ating the equations for the m easurem ent m odel for y (see E quation 3). T h e results up lo this point provide the m eans to estim ate m agnitudes an d asymptotic standard errors for intercepts an d coefficients for all the equations in th e latent variable an d in the m easurem ent model. See Bollen (in press) for a discussion o f a m ethod to estimate th e variances and covariances o f the laient exogenous variables (%), of the equation disturb­ ances (Q , or o f th e errors o f m easurem ent (e, 8).

HETEROSCEDASTIC DISTURBANCES So far we have assum ed lhat the disturbances of each equation are hc>moscedastic— have the same variances across observations. For some crosssectional or longitudinal data, this may be too restrictive an assumption. In this section, hctcrosccdasticity, its consequenccs, m ethods to estimate heteroscedaslic consistent standard errors, and e s tim a to r that take account of heteroscedasticity are discussed. A latent variable m odel is developed, but it is easy to see that an analogous m ethod applies to the m easurem ent model. A useful starting point is to retu rn to the previous m odel com ponents of the disturbance u; from the y, E quation 9:

235

8. HF.TEROSCEDASTIC1TV IN LISREL MODELS

Uj = e, - p.c, - r,8, + C;

(25)

Heteroscedasticilv can enter u, in several ways. If the y, variable that scales r|, has a heteroscedastic m easurem ent error, ei; then this leads n; to be heteroscedastic even if all the other com ponents in Equation 25 are ho­ moscedastic. An interesting implication of this is that the choice o f scaling variable can determ ine w hether hcteroscedasticity is present. Thus, a con­ sideration in choosing a scaling variable is w hether its m easurem ent error is homoscedastic. The and 1^8, terms m ean that heteroscedasticity in the m easurem ent errors for the other scaling variables that en ter the y, equation also can create heteroscedasticity of u,. Finally, heteroscedaslicily of the original error in the latent variable equation, C,,, means heteroscedasticity. This latter heteroscedasticity is not influenced by the choice of scaling indicators. Two consequences follow the presence of heteroscedastirity in u;. O ne is that it is possible to develop a 2SLS estimator with a “smaller" asymptotic covariance matrix. T he other is that the usual uncorrected asymptotic covariance matrix for 2SLS is incorrect and this could lead to inaccurate statistical tests. f discuss the second point first. Define the N x N nonsingular covariancc matrix of u, (from Equation 10) to be Sij(E(uiu i') -- Qj). In the case o f homoscedasticity £2; = ouI where I is an N x N identity' matrix. In Equation 17 the asymptotic covariance matrix of A, under homoscedasticity is: ] A A N - '< p l im |- Z ;Z ,

(26)

Following liollen (1984, p. 4) the asymptotic covariance matrix of A; with heteroscedasticity is: AA (z-X ACOV(A.) = N-1 plim N V

n>N Z/Q.Z,

1 plim

N

,A A. Z[Z. plim N J \ /

(27)

where ACOV is the population asymptotic covariance matrix. Equation 27 simplifies to Lquation 26 when il, = o*I. More generally, it shows that the homoscedasticity derived asymptotic covariance matrix departs from the one appropriate for hctcroscedastic disturbances. Eicker (1963), H orn, H orn, and Duncan (1975), and White (1980), am ong others, suggested a heteroscedastic consistent asymptotic covariance matrix for OLS regression models. Here, I use W hile’s (1982) extension of these results to the 2SLS estimator that allows a heteroscedastic-consistent estimator of the asymptotic covariance matrix with unknown forms of

236

BOl.l.FN

s'

pliv( v x

M

heteroscedasticity. W hite (1982) began wilh a set o f less restrictive assum p­ tions th an those I used ea rlier in this ch a p te r. A ssum e th at Zj a n d ii, consist o f in d e p e n d e n t but n o t (necessarily) identically d istrib u ted ra n d o m vari­ ables. T his assum ption allows heteroscedasticity o f u*. E arlier i assum ed that < v ;v ]

( VI'uI)

if

=0 N \ / (see E quations 14 th ro u g h 16). T h a tis, I assum ed th a t th e m o m e n t m atrices o f th e in stru m e n ta l variables V, an d Zj, o f V, an d an d o f Vt a n d Uj sto­ chastically converged to fixed m atrices. W hite replaced these assum ptions with the less restrictive ones o f u n ifo rm b o u n d ed n e ss o f the e rro r variances, th e cross-m om ents o f th e elem en ts o f Vj a n d Zj, an d th e m o m en ts o f ihe elem ents o f Vj w ith itself. Also assum ed is that th e m ean across observations o f the cross-m om ent m atrix o f V; a n d Z* has uniform ly full co lu m n rank, th a t the m ean m o m e n t m atrix o f V, with itself is uniform ly positive definite, an d th a t EfV /u,) - 0 for all observations. T hese assum ptions are less d e­ m an d in g th an the previous ones in lh a t th e cross-m om ent a n d m o m en t m atrices n ee d n o t converge to fixed m atrices an d th e TVs, V,, a n d th e dis­ turb an ces u, can have different distrib u tio n s across observations. U n d e r g eneral co n d itio n s (see W hite, 1982, pp. 487-490) th e 2SLS es­ tim ator, A;, has a heleroscedastic co nsistent asym ptotic covariancc m atrix th a t is estim ated by:

, A

A .-]

Z'jZj

acov(Aj) =

N

= £ w an d plim

ysLs u ‘i^ ' Uz ,'J N

A

A .-1

Z 'jZ jN

N

(28)

w here ^ = V .^ V .'V ,) 'V j'Z j, u(J = Yy - Z , ^ , , j = 1, 2 , . . . , N in d ex es th e obser­ vations in a sam ple, an d th e low er case ‘‘acov” stands for th e sam ple estim ate o f the asym ptotic covariance m atrix. A lthough E q u atio n 28 is a co nsistent estim ato r o f th e asym ptotic covari­ ance m atrix, th e re may be o th e r consistent o n es with b e tte r finite sam ple properties. O n e issue is th a t u? may n o t be th e best estim a to r to use given th e relation betw een n, a n d u,; u,=x-zA = ZjAj + U; - ZjA,

= Zi(Ai-f)/+ui = U j-Zj(Zj'Z,)-'2/U j

= ( I - Z j ( 2 j 'i ) - 'i > , A

= Huj

(29)

8. HETEROSCEDASTICITY IN LISREL MODELS A

A

A

237

A

w here H - (I Z ^Z 'Z ,) 1Z '). An estim ator that takes account o f the relation between a, and u m ight b etter estimate the variance o f u,j than u 2. Analo­ gous co n cern s in the con tex t o f developing heteroscedastic-consisl.ent standard e rro r for OLS regression has led H orn et al. {1975)an d MacKin­ non an d W hite (1985) to propose o th e r estim ators o f the disturbance vari­ ances for each observation th a t have b etter finite sample perform ance than using ujj. T he same m ight be true in this context. R ather th a n estim ating a heteroscedastic-consistent asymptotic covari­ ance m atrix o f the 2SL.S estim ator, it is possible to develop an alternative estim ator that incorporates unknow n heteroscedasticity. W hite (1982, p. 491) called this the two-stage instrum ental variable (2SIV) estim ator and it is: Ai2Slv = ( Z / v f r W 'Z ,'V & 'V T X

(30)

w here 12, is a diagonal m atrix containing u? in the m ain diagonals. It has an estim ated asymptotic covariance m atrix of: acov(AiJslv) - (Z,'V,(Vi'Si,V.)‘iV;Z,)-'

(31)

T he 2SIV estim ator is generally asymptotically efficient relative to 2SLS u n d e r heterosccdastic disturbances (W hite, 1982, p. 492). Analysis can es­ tim ate u|b y applying ihe usual 2SI.S to the m odel an d using these residuals to form u2. A n o th er alternative available when a researcher knows £2, o r can con­ sistently estim ate it is GLS approach to 2SLS (Bollen, 1984; Bollen & Kmenta, 1986; Bowden & T urkington, 1984). T h e procedure is to: (a) estim ate th e reduced form o f Z, on V, using £2,' (or ) as a w eight in a weighted least squares p rocedure,(b) estimate the second-stage structural equation with O f1 (or i V ) as a weight substituting Z, for Z w here Z, = V;l l an d I I is ihe m atrix o f reduced form coefficients from (a). More formally, (32) where z = v ^ v u - v y v & 'z , A

T he estim ate £2r' replaces £2; in the com m on case lh at ihe latter is n o t available. T he estim ated asymptotic variance o f this coefficient estim ator is s2 (Z'iij 'Z’) 1 w here s2 estim ates the residual variance. T h e estim ate £ 2 ' replaces 12^: when 12, is not available. A valuable feature o f th e A jC2SLS is

238

BOLLEN

that this estim ator also applies lo aulocoi related disturbances. Because the focus of the chapter is heteroscedasticity, I d o n o t pursue the issue of au to ­ correlation here. In summary, this scction proposes several approaches to the problem of heteroscedasticity in latent variable m odels estim ated with 2SLS. O ne is to estim ate a heteroscedastic-consistent covariance matrix for th e coef­ ficients. A nother is the 2SIV estim ator of W hile (1982) lhat corrects for heteroscedasticity o f an unknow n form. T he last is a GLS analog for 2SI.S, G2SLS. These alternatives arc illustrated in the n ex t section.

EXAMPLE T o illustrate the estim ators o f the last section, 1 sim ulated a data set with the following structure:

2| C

+ *1 = 5 + *1 - 1 + 8, X, = 5 + + = 1 - 8, >'l = q + £i

S 8i

% = 5 +

r| + e2

= 5 + q+£3

w here £ = 8^ = §.£, 8S = 8j^, e, = e-£, and 8, 8',, 8;, e,, e2, e£ are gener­ ated as indep en d en t, standardized norm al variables using the “rn d n " func­ tion in GAUSS (Aplech Systems, 1993). T h e £ is a nonstochastic variable whose initial values are generated with a norm al variable generator. This sim ulation leads to heteroscedasticity in the disturbances for the first, third, fourth, an d seventh equations in E quation 33. C onsider the latent variable equation that is the first equation in Equa­ tion 33. Substituting (x, - 8,) for ^ an d (j, - £,) for q leads lo: >>, = 5 + 2x, - 28, + e, + C

(34)

The I V s for x , in this equation are x 2 and x-,. T able 8 .1 provides the 2SLS, 2SIV, an d G2SLS estimates for the latent variable m odel at the top o f Equa­ tion 33 using Equation 34 as the observed variable form o f the equation. T h e intercept and coefficient estim ates a te fairly close across estim ators and are close to the population param eters for the equation. Because all estim ators are consistent estim ators u n d e r hcteroscedastirity, this is not surprising. T h e estimates o f th e asymptotic standard errors are in paren­ theses (). Those for the 2S1.S estim ator would generally be inaccurate due

239

8. HETEROSCEDAS'nCITY IN LISREL MODELS TABLE 8.1 2SLS, 2SIV, and G2SLS, C oefficient Estimates and Estimated Asymptotic Standard Errors for L atent Variable Equation U sing Sim ulated Data

(,V = 200) T| 2 S IS

2S1V

C 2 S IS

Intercept

5.041 (0.212)* [0 .2 1 2 ]11

5.038 (0.210)

5.146 (0.192)

1

2.273 (0.245) [0.302]

2.275 (0.301)

2.187 (0.295)

■'estimated asymptotic standard error ^estimated heteroscedasTic-vonsislent standard error

to ihe heteroscedasiicity o f the disturbance. T h e heteroscedastk-corisistcnt estimates of the asymptotic standard errors for th e 2SLS estimates in the first colum n are in brackets [J. In this ease ihe u n co rrcctcd standard errors are smaller than the corrected standard errors for the regression coeffi­ cient, but not for the intercept. In o th e r exam ples I found th at the u n co r­ rected standard errors are generally sm aller than the corrected ones for 2SLS. T h e heteroscedastic-consistent asymptotic standard errors for 2SLS are close to die standard errors for 2SIV and G2SI.S estimates. I observed a sim ilar p attern in o th e r exam ples with th e G2SLS standard errors usually the smallest. A far m ore extensive M onte-Carlo sim ulation study would be required to determ ine w hether general relations h o ld betw een these esti­ mates and their asymptotic standard errors.

CON CLU SIO N S N o SEM package fails lo include full inform ation estim ators such as ML or GLS. Rut most om it an op tio n for lim iled-inform ation estimators. A m ajor exception is th e jo resk o g an d Sorbom (1986) LISREL software. Even LISREL, however, does not provide asymptotic sta n d ard errors for th e limited-inform ation estimators an d the LISREL lim ited-inform ation estim ator differs from that described here. Given the com putational advantages, the elim ination o f convergence problem s, the decreased sensitivity to specifi­ cation errors, and the possibility o f significance testing u n d er n o n n o rm al­ ity, the 2SI*S estim ator presented here has great potential. F urtherm ore, I explained how this estim ator could be adapted to handle heteroscedastic disturbances o r errors. O n e m odification I p resented was

240

BOLLEN

heleroscedastic-consistent asymptotic standard errors. Another is to apply estimators that correct for unknown or known forms of heteroscedasticity using 2SIV or G2SLS. The latter ability, to have an estimator that takes account of hctcrosccdasticity, is one that is not yet available for full-infor­ mation estimators of LISREL models. In oilier work, I show that equation-by-equation tests of overidentification apply to the 2SLS estimator (Bollen, in press). These should prove helpful in better isolating problems in the specification of a model. A different paper shows that the 2SLS estimator easily applies to models with interactions or other nonlinear functions of latent variables (ftollen, 1995). Although the features of the 2SLS estimator sound extremely promising, I caution the reader that these properties are asymptotic. Analytical and simulation results should illuminate the finite sample behavior of the es­ timator and asymptotic standard errors. Then we will have a better sense of the practical advantages of the 2SLS, 2SIV, and G2SLS estimators.

ACKNOWLEDGMENTS I gratefully acknowledge support for this research from the Sociology Pro­ gram of the National Science Foundation (SES-912I564) and the Center for Advanced Study in the Behavioral Sciences, Stanford, California.

REFERENCES Aptech Systems. (1993). GAUSS: 77it GAUSS System Version 3.1. Maple Valley. WA; Author. Bollen, K. A. (1984). A note on 2S1S under hxierosf.ed/istic and/or autoregressive disturbances. Paper presented at the annual American Sociological Association Convention, San Antonio, TX. Bollen, K. A. (1989). Structural equations with latent variables. Now York: Wiley. Bollen, K. A. (1995). Structural equation models that are nonlinear in latent variables: A least squares estimator. In P. M. Marsdcn (Ed.), Sociological vietkodology (pp. 22S-251). Cambridge, MA: Blackwell. Bollert, K. A. (in press). An alternative 2S1.S estimator for latent variable equations. Psyckometrifui, Bollen, K. A., 8 c K m enta.J. (1986). Estimation o f simultaneous equation models with autore­ gressive or heteroscedastic disturbances. In J. Kmeula (Ed.), Elements nj ecorwmdricx (pp. 704-711). New York: Macmillan. Bowden, R. J., & Turkington, D. A. (1984). Instrumental variables. Cambridge: Cambridge University Press. Brown, R. L. (1990). The robustness o f 2SLS estimation o f a non-normally distributed con finnatory factor analysis model. Multivariate Behavioral Research, 25y 455-466. Ctideck, R. (1991). Noniterative factor analysis estimators, with algorithms for subset and instrumental variable selection. Journal o f Education/d Statistics, 16, S5-52. Eicker, F. (1965). Asymptotic normality and consistency of the least squares estimators for families o f linear regression. Annals of Mathematical Statistics, 34, 447-456.

8. HF.TEROSCEDASTI CITY IN l.ISREL MODEIiJ

241

O rb in g , D. M., & Ham ilton. J. C. (1994). T h e surprising viability o f a sim ple altern ate estim ation procedure for construction o f large-scale structural equation m easurem ent models. Structural E/fuation Modeling, /, 103-115. H agglund, G. (1982). Facior analysis by instrum ental variables. Psychometrika, 47, 209-222. Ilagglund, G. (1983). Factor analysis by instrumental method*: A Monte Carlo study of some estimation procedures. (Rep. No. 80 2). Sweden: University o f Uppsala, D epartm ent o f Statistics. H o r n , S. D . . l l o m , R . A . , 8 c D u n c a n , D . B . ( 1 9 7 5 ) . E s t i m a t i n g h e t c io s c e d a s t ic v a r ia n c e s in l i n e a r m o d e l s . Journal of the Ametican Statistical Association, 70, 380-385. Jennrich, R. I. (1987). T ableau algorithm s for factor analysis by instrum ental variable m ethod. Psychometrika, 52, 469-476. Joieskog, K. G. (1983). Factor analysis as an error-in-variables m odel. In H . W ainer Sc S. Messick (Eds.), Principles of modern psychological measurement (pp. 185-196). Hillsdale, NJ: Lawrence Erlbaum Associates. Joreskog, FL C., & S6rlx>m, D. (1986). IJSREL VI: Analysis of linear structural relationship by maximum likelihood, instrumentai variables, and least stfuares methods [com puter program ]. Mooresville, IN: Scientific Software. Lance, C. E., Cornw ell,J. M., & Mulaik, S. A. (1988). Limited inform ation param eter estimates for laten t o r mixed manifest and la ten t variable models. Multivariate Ikhavioral Research, 23, 155-167. Lukashov, A. (1994). A Monte ( M o study of the IV estimator in factor amdysis. U npublished m aster’s thesis, D epartm ent o f Sociology, University of N orth C arolina, Chapel Hill. M acKinnon, J. G., Sc White, H . (1985). Some betei oskedasiicily-consistenf covariance m atrix estimators with improved finite sam ple properties. Journal of Econometrics, 29, 305-325. Madanskv, A. (1964). Instrum ental variables in factor analysis. Psychometrika, 29, 105-113. Satorra, A., &• Bentler, P. M. (1991). G oodn ess-of-fit test u n d er IV estim ation: Asymptotic, robustness of a NT test statistic. In R. Gutierrez 8 c M. J. V alderrana (Eds.), Applied stochastic, models and data analysis (pp. 555-566). Singapore: W orld Scientific. White, H. (1980). A hetcroskedasticicy-consistent covariance matrix and a direct test for heteroskcdasticity. Fconometrir.a, 48, 721-746. W hite, H. (1982). Instrum ental variables regression with in d ep en d en t observations. Fcvnttmetrica, 50, 483-499.

This page intentionally left blank

C H A P T E R

N I N E

Full Information Estimation in the Presence of Incomplete Data James L. Arbuckle Temple University

Most multivariate methods require com plete data, but most multivariate data contain missing values. This problem is usually dealt with by fixing the data in some way so that the data can be analyzed by m ethods that were designed for complete data. T he most commonly used techniques for treating missing data are ad hoc procedures that attem pt to make the best o f a bad situation in ways that are seemingly plausible but have no theo­ retical rationale. A theory-based approach to the treatm ent of missing data u n d er the assumption of multivariate normality, based on the direct maxi­ mization of the likelihood of the observed data, has long been known. The theoretical advantages of this m ethod arc widely recognized, and its appli­ cability in principle to structural modeling has been noted. Unfortunately, theory has not had much influence on practice in the treatm ent of missing data. In part, the underutilization o f maximum likelihood (ML) estimation in the presence of missing data may be due to the unavailability o f the m ethod as a standard option in packaged data analysis programs. T here may also exist a (mistaken) belief that the benefits o f using ML rather than conventional missing data techniques will, in practice, be small. Recently, ML estimation for incom plete data has begun to appear as an option in structural modeling programs, giving new relevance to the theoretical work that dem onstrates die advantages of this approach. This chapter has two purposes: (a) to dem onstrate that ML estimation is a practical m ethod that can be used on a routine basis, and (b) to give a

243

244

ARBUCKLE

rough indication of how much one can expect lo benefit from its use. The chapter is organized as follow's: 1. Current practice in the treatm ent of missing data is summarized. 2. ML estimation is described and its advantages stated. 3. Com putational details are given for ML estimation and for some com peting methods. 4. Two simulations are reported, com paring the perform ance o f ML to the perform ance of com peting missing data techniques. T he first simulation dem onstrates the efficiency of ML estimates. T he second dem onstrates ihe reduced bias of ML estimates when values that are missing fail to be missing ‘'completely at random .” 5. A simple “how to” example is presented to show how a ML analysis wilh incom plete data differs in practice from an analysis with com­ plete data. 6. A second example shows how it can pay to design a study so thai some data values are intentionally missing. 7. An approach to the im putation of missing values is described. 8. T he com putational cost of ML is addressed.

CURRENT PRACTICE IN TH E TREATMENT O F MISSING DATA This chapter does not attem pt to review extant approaches to the treatm ent of missing data. Summaries are provided, for example, by Kim and Curry (1977), Little and Rubin (1989), and Brown (1994). T he purpose o f this chapter is to focus on current practice in the handling o f missing data, which appeal's to be dom inated by the m ethods sometim es known as pair­ wise deletion (PD) and listwise deletion (LD). Kim and Curry remarked on the widespread reliance on these two methods. Roth (1994) made the same observation after examining 132 analyses in 75 studies taken from two psychology journals, Journal of Applied Psychology and Personnel Psychology. Roth noted that it was often hard to tell w hether the data in an analysis contained any missing values. W hen it appeared that data values were miss­ ing, it was often unclear what technique was used to handle them. Guessing where necessary, Roth concluded that 86 of the 132 analyses were based on data that contained missing values. LD was used in 29 o f those analyses. PD was used 24 times. Mean substitution and regression imputation (see, e.g., Kim & Curry, 1977) were each used once. In the rem aining 31 cases, Roth could nol tell which missing data technique was used. Because of the

9. FULL INFORMATION ESTIMATION

245

popularity of LD and PD, this paper was prepared wilh users o f those two methods in mind.

ML ESTIMATION T he principles of MI. estimation with incom plete data are well known (An­ derson, 1957; Beale & Little, 1975; Hardey, 1958; Hardey & Hocking, 1971; Little, 1976; Rubin, 1974, 1976; Wilks, 1932). Finkbeincr (1979) used ML for common factor analysis with missing data. Allison (1987) and M uthen, Kaplan, and Hollis (1987) showed how the m ethod applies to structural modeling. Missing Data Mechanisms In order to state the advantages of ML estimation over PD and LD, it is necessary to discuss mechanisms by which missing data can arise. In par­ ticular, it is necessary to make use of Rubin’s (1976; Little & Rubin, 1989) distinction between data thal arc missing completely at random (MCAR) and data that are missing at random (MAR). Rubin's formal conditions for these two types of randomness are not easy to paraphrase. T he distinction can be illustrated with the following simple example, which draws upon examples employed by Allison (1987) and by Liltle and Rubin. Suppose a questionnaire contains two items: O ne item asks about years o f schooling, and the other asks about income. Suppose that everyone answers the schooling question, but that not everyone answers the incom e question. U nder these conditions, what is the mechanism that decides who answers the income question? Rubin considered several possibilities. First, suppose that whether a respondent answers the incom e question is independent of both income and education. Then the data are MCAR. Respondents who answer the income question are a random sample o f all respondents. Suppose, on the o th e r hand, that highly educated people are less likely (or m ore likely) than others to reveal their income, but that am ong those with the same level of education, the probability of reporting incom e is unrelated to income. T hen the data no longer qualify as MCAR, but they are still MAR. A third possibility to consider is that, even am ong people wilh the same level of education, high incom e people may be less likely (or more likely) to report their income. T hen the data are not even MAR. The MAR condition is weaker than the MCAR condition. That is, data that are MCAR are also MAR. Some confusion over die use of die terms can result from the fact that, as M uthen ct al. (1987) observed, MCAR is what people usually m ean by the term missing at random. Many people

246

ARBUCKLE

would say thal data that are MAR (but not MCAR) are not missing at random in the everyday sense. T he advantages of MI. relative to PD and LD can now be stated. PD and LD estimates are consistent, although not efficient, provided that the data arc MCAR. With data that are not MCAR but only MAR, PD and LD estimates can be biased. ML estimates, on Ihe other hand, are consistent and efficient un d er the weaker condition that the data be MAR. Further­ more, M uthen et al. (1987) and Little and Rubin (1989) suggest that the use of ML will reduce bias even when the MAR assumption is not strictly satisfied. Finally, to add to the list of PD ’s shortcomings, PD does not provide a means to obtain estimated standard errors or a m ethod for testing hypotheses. Implementations of ML in Structural Modeling Allison (1987) and M uthen et al. (1987) proposed a m ethod by which the LISREL program can be used to obtain maximum likelihood estimates with missing data. Bentler (1990) showed how to use the same technique with EQS (Bender, 1989). Werts, Rock, and Grandy (1979) described a similar m ethod for use in confinnatoiy factor analysis. T he application of the m ethod of Allison (1987) and M uthcn et al. (1987) is impeded by the fact that in practice it can only be vised when there are not many distinct patterns of missing data. The m ethod is im­ practical in the m ore frequent situation where each pattern o f missing values occurs only once or a handful of times. Situations can indeed arise in which only a few distinct patterns of missing data occur. Both Allison and M uthen et al. described situations where only a few missing data patterns would be expected. O n the oth er hand, this is not the most prevalent type of missing data problem. It is also fair to say that using the Allison and M uthen et al. m ethod requires an exceptionally high level of expertise, and this has no doubt been a limiting factor in its use. At present, ML estimation with missing data is available in at least two structural modeling programs, Amos (Arbuckle, 1995) and Mx (Neale, 1994). These program s do not place a limit on the num ber of missing data patterns, an d do not require the user of the programs to take elaborate steps to accom modate missing data.

COMPUTATIONAL DETAILS ML estimation for structural modeling under the assumption o f multivari­ ate normality is described here. Com putational details for LD and PD, as im plem ented in the simulations to be reported later, are also given. A

247

9. FIII.I. INFORMATION ESTIMATION TABLE 9.1 Data for Seven Cases Ineluding T hree Variables Case

X

Y

/.

1 2 3 4 5 6 7

13 14 15 ir. 17

23 22

21 17 II



18 17 20 20







12 8 15

small, artificial data set wilh missing data is used to explain all th ree m eth­ ods. In this data set there are th ree variables, X, Y, an d Z. There are seven cases. T he data are shown in Table 9.1. M axim um Likelihood Estim ation K inkbeiner (1979) gave com putational details for m axim um likelihood es­ tim ation in confirmatory' factory analysis with incom plete data. T h e g ener­ alization to structural m odeling is straightforw ard. Let n be the n u m b er o f variables an d .V the num ber of eases. In the num erical exam ple given, n = 3 and N = 7. Let n. be the nu m b er o f obseived d ata values for case i, an d let x ; be the vector of length nf th a t contains those observed values. In the num erical exam ple, nt = n 2 = ny - 3, n 3 = nh = n 7 = 2, and = x 'ix'3 = x '4= x 'a -

[13 [14 [15 [16 [17

x /6 = [ 2 0

23 21) 22 17] 11] 18] 17 12] 8]

xS = [20 15] Let the n observed variables have the population m ean vector n an d covariance m atrix £. In the num erical exam ple,

and £ = o„ Let ft, and X, be the population mean vector and covariance m atrix for the variables that are observed for case i. Each ji, can be obtained by deletin g

248

ARBUCKLE

elem ents of pi, an d each £, can be obtained by deleting rows and columns o f I . In the num erical exam ple, m = n? = Ms = j», Hi = Lui /tv], ni = [pi fit], and n! = n ' = \M2 fi 3 ), £ 1 = £ 2 = If, - 2, and °ll

a n

a ,*

a 2i

cr.i2

, x 4 = 0 ,1

O33

anc! X(j = L, =

a 2i, 0. Large values o f complexity indicate a high interaction between the variables, and low degree of complexity represents less in teractio n between the variables. T hus, using the idea o f the maximal inform ational covariance com plex­ ity m easure C ,( I ) in (5) along with the additivity property o f complexity of an overall system S given in Equation 1, Bozdogan introduces and develops a very general inform ational com plexity (ICOMP) criterion to evaluate nearly any class o f statistical models. For exam ple, for a general multivariate linear o r n o n lin ear structural m odel defined by Statistical m odel _

signal

+

noise

: v— — / \— - / determ inistic + random com ponent com ponent,

(6)

ICOMP is designed to estim ate a loss function: Loss = Lack of Fit + la c k o f Parsimony + Profusion of Com plexity (7) in two ways, using the additivity property o f inform ation theory an d the developm ents in Rissanen (1976) in his Final Estimation C riterion (FEC)

306

WILLIAMS, BOZDOGAN, AIMAN-SMITH

in estim ation an d mode! identification problem s, as well as A kaike's (1973) I n fo r m a tio n C rite r io n (A IC ), a n d its an a ly tic a l e x te n s io n s given in B ozdogan (1987). A pproach 1.

B ozdogan uses the covariance m atrix p ro p ertie s o f the p a ra m e te r estim ates o f a m odel starlin g from th e ir finite sam pling distributions. U sing th e jo in in g axiom o f com ­ plexity in (1) o n th e estim ated covariance m atrices, and after som e work ICO M P is d efin ed as

ICOM P (O verall M odel) = -2logL(& ) + 2 [C,(C ov(G)) + C|(Cov(£))]

(8)

't r ± (0)1 [ tr t(£ ) = -21ogL{&) + qlog --------------logl£(G )l + n l o g ---------- - log l£(e)l,

w here, C t d en o tes th e m aximal in fo rm atio n com plexity o f a covariance m atrix £ o f a m ultivariate no rm al distrib u tion given in E q uation 5. W e in te rp re t this first ap p ro ach to ICO M P as follows. • T h e first term in ICOM P in E quation 8, m easures th e lack o f fit (i.e., inference uncertainty), • T h e second term m easures the com plexity o f ih e covariancc m atrix o f the p aram eter estim ates o f a m odel, an d • T h e th ird term m easures the com plexity o f th e covariance m atrix of the m odel residuals. T h erefo re using ICOM P with this ap p ro ach , o n e can treat th e erro rs to be correlated, because in general, th e re does n o t a p p e a r to be any easy way to in clu de d ep en d en ce. As the param eters o f the m odel increases, so will th e interactio n benvecn them increase. As a result, th e second term o f ICOM P will increase an d will d o m in a te th e third term . O n th e o th e r h a n d , as th e errors becom e m ore co rrelated , so will th e interactions am ong them be larg er an d the com plexity o f th e covariance m atrix o f the errors increase. C onsequently, il will d o m in ate th e second term . H en ce, the sum o f ihe last two term s in ICOM P will autom atically take in to acco u n t the n u m b e r o f p aram eters an d the d eg rees o f freedom by “trading-off' with th e first term , w hich m easures the lack o f fit. So, co m p arin g with AIC, rhe sum o f th e last two term s in ICO M P will replace th e penalty term , two tim es th e estim ated param eters in th e m odel o f AIC, an d m ore. N ote that lack o f parsim ony an d profusion o f com plexity is taken inlo acco u n t by ihe sum o f th e com plexities, an d they arc n o t necessarily w eighted equally d u e to th e differen t dim ensionalities o f an d £.

10. INFF.RF.NOE PROBI.F.MS M T U FQUVAt.KNT MODFI.S

307

To cover ihe situation where the random e rro r must be m odeled as an in d ep en d en t a n d /o r a d ep e n d en t sequence, th e following second approach is proposed. A pproach 2.

Bozdogan uses the estim ated inverse-Fisher infonnation m atrix (IFIM) o f the m odel by considering the entire param eter space of the m odel. In this case, ICOMP is defined as

ICOMP(Overall M odel) = -2logL (& + 2 C ,(5 I)

(9)

C ,(^-') = s/2 1 o g[trace(£-')/s] - l/2 1 o g l£

(10)

where

and w here, s = dim (*“’) = rank (&“‘). For m ore on the Q m easure as a “scalar m easure” o f a nonsingular co­ variance matrix o f a multivariate norm al distribution, we refer the read er to the original work ofVan Em den (1971). T h e C| m easure also appears in Maklad and Nichols (1980, p. 82) with an incom plete discussion based on Van F.mden’s work in estim ation. However, we fu rth er n ote th a t these authors abandoned the C, m easure, and never used it in th eir problem (Bozdogan, 1990a). T he first com ponent of ICOMP in Equation 9 m easures the lack o f fit of the m odel, an d the second co m p o n en t m easures the com plexity of the estim ated IFIM, which gives a scalar m easure of the celebrated Cramer-Rao lower bound m atrix of the model. H ence: • ICOMP controls the risks o f both insufficient an d overparam eterized models; • It provides a criterion th a t has th e virtue o f judiciously balancing between lack of fit an d the m odel com plexity d ata adaptively; • ICOMP removes from the researcher any need to consider the pa­ ram eter dim ension o f a m odel explicitly an d adjusts itself autom atically for the sample size; • A m odel with m inim um ICOMP is chosen to be the best m odel am ong all possible com peting alternative models. T h e theoretical justification of this approach is that it com bines all three ingredients o f statistical m odeling in Equation 7 based on the jo in in g axiom of complexity of a system given in Equation 1. Also, it refines fu rth er the derivation o f AIC, and represents a com prom ise between AIC and Rissanen’s (1978, 1989) MDL, o r CM C o f Bozdogan (1987, p. 358). It show's that Akaike (1973), in obtaining his AIC, goes to the asymptotic

308

YVJI.UAMS, BOZDOGAN, A1MAN-SMITH

distribution o f the quadratic fo n n too quickly, which involves the Fisher inform ation m atrix (FIM) of the estim ated param eters. As is well known, th e sam pling distribution o f the ML estim ators is m ultivariate normally distributed with the covariance matrix being the IFIM. Instead o f im m ediately passing to the asymptotic, distribution o f the param eter estimates, an d if o n e uses the com plexity of th e finite sam ple estim ate o f IFIM of th e m ultivariate distribution, o n e obtains ICOMP given above in th e second approach. Indeed, if C, (Est. 1FIM) is divided by the num bers o f estim ated param eters across different com peting alternative models, one can obtain th e so-called the “magic n u m b er” o r the "penalty per param eter” in inform ation criteria, rath e r than fixing this at the critical value 2 as in AIC. In what follows, wc briefly develop ICOMP based on Bozdogan’s (1991) work in structural equation m odels (SEM). Following Joreskog & Sorbom (1989) in matrix notation, we define the full SFM by three equations: Structural Model: n = b n + r 4 (rxr) (rxr) (rx l) (ixs) (sxl)

+

5 (rx l)

M easurem ent .Model for y: y

=

Ay

( p x I)

+

tj

(pxr) (rx l)

s (p x l)

M easurem ent Model for x: x

=

A £. + 8 ' (qxs)(sxl) (q x l)

(q x l)

(13)

with th e usual assum ptions. T o show the closed form analytical expression o f ICOMP using the sec­ ond approach in (9), we let £(£)) denote the m axim um likelihood estim ator (MLE) o f the im plied covariance matrix £(©) o f the full SE m odel given by A

m

A

AAA

A

A

A

A

a (i - b )-1 < r ® r + v x i - b t ‘av+ ©c

A A

=

A A A

A

.



A

A

\ (i

A xO F ( I - B ') 1A ;

AAA

B)

' A A A

A

A S«J>A'X + 0 .

.(14)

Following the notation in M agnus (1988. p .169), M agnus an d N eudecker (1988, p. 319), we give (see Bozdogan, 1991) the estim ated invcrsc-Fisher inform ation (IFIM) for th e full SL model in the case of single sample defined by Est J~' - 5 “’ -

~

o

A

A ®

A A

2i>,p.q) f i ( 0) « i(fl)]

( 15)

309

10. INFERENCE PROBLEMS WITH EQUIVALENT MODELS

In Equation 15 Dp is a unique p -x l/2 p (p + l) duplication matrix which transforms v (I) into vcc(E). V cc(i) denotes the l/2 p (p + l)-v ccto r, which is obtained by elim inating all the supradiagonal d em en ts o f L. Vcc(£) vectorizes a matrix by stacking the columns of 2 one undernealh the other, and I>'p is the Moore-Penrose inverse of the duplication matrix Dp, and that D 'p = (D'DpDpT'Dp. Then, after some work (see Bozdogan, 1991, 1994b) inform ation com­ plexity ICOMP of the estimated IFIM lo r models wilh latent variable for single multivariate sample is given by ICOMP(SEM) - -21ogL(&) + slog[trace($-‘)/sJ - logl£-'l A

A

= n(p + q)log(2ft) + nloglEI + n trE 'S ; (la c k of fit) A

A

«-(£) *■l/2cr(L‘V

A

P *!

l / 2 ( t i i ) a i £ (a^f (Comp. olTFlM),

+ (p + q + 2) log ll! - (p + q) log(2),

(lb)

A

where tr(E) = trace (estin^ued model covariance matrix), which measures the total variation, and ILI = determinant, (estimated model covariance matrix) which measures the generalized variance. We note diat in ICOMP(SF.M), the effects of the param eter estimates and their accuracy are taken into account. Thus, counting and penalizing the num ber of param eters are eliminated in the model-fitting process and complexity is viewed not as the num ber of parameters in the model, but as the degree of interdependence am ong the com ponents of the model. As we pointed out, C |(5 _l) in Equation 16, measures both ihe lack of parsimony and the profusion of complexity (i.e., interdependencies am ong the param eter estimates) of the model. It also gives us a scalar measure of the celebrated Cramer-Rao lower bound matrix of the model lhat meas­ ures the accuracy of the param eter estimates. By defining complexity' in this way, ICOMP provides a more judicious penalty term than AIC and MDL (or CAIC). The lack of parsimony is automatically adjusted by C, if}'1) across the com peting alternative models as the param eter spaces o f these models are constrained in the model fitting process. It is because of this virtue of ICOMP that we use ju st the complexity part of IFIM to choose am ong the equivalent models, because equivalent models typically (but not necessarily) have ihe same num ber of param eters estimated within the model. Therefore, in equivalent models, ju st counting and penalizing the num ber of param eters docs not distinguish the models, because these often have the same integer value with each of the equivalent

310

WILLIAMS, BOZDOGAN, AIMAX-SM1TH

models. O ne should bare in mind lhat the struclure of equivalent models, or the structure of nonequivalent models, is not determ ined ju st by the num ber of its param eters only. In this sense, ju st counting and penalizing the num ber of param eters docs not have the provision of taking into account the interdependencies am ong the param eter estimates in the es­ timated implied model covariance matrix as we alter the paths, or change the signs of some of the param eters as we validate our substantive models. In oth er words, the usual parsimony fit indices, o r the noncentrality fit indices, and also AlG-type criteria, will all fall short for the estimation of the internal cohesion or description of the equivalent models. Therefore, the general principle is that for a given level o f accuracy, a sim pler or a m ore parsim onious model with m inim um complexity (i.e., in the sense of m inimum variance) is preferable to a m ore complex one am ong the equivalent models to choose a good model structure. To investigate the utility of the complexity approach to evaluate equiva­ lent models, in this chapter, we propose to use A

W

) = A

A

A

A

ti ( E ) + l / 2 tr ( 2 ? ) + ]/2 (tr2 ; )* + £ ______________________________

(ctj)2

izl _____

" (Comp, o f IFIM)

1 /2 (p + q )(p + q t 3)

+ (p + q + 2)loglI I - (p + q) log(2),

(17)

that is die second com ponent of the ICOMP(SEM) defined in Equation 16, because the lack of fit term will be equal (or almost equal) in equivalent models. Since C |(_l) is a “scalar measure" of the estimated inverse-Fisher inform ation matrix (IFIM), based on the results in Roger (1980, p. 174) and correcting the error, Bozdogan (1992) proposed a new 100(l-«)% approxim ate confidence interval (AC!) for the complexity o f IFIM o f the SF. model given by 1/2

(18)

If we let w be the width of the AC1, then, say, for different SF models we can com pute the 100(l-a)% ACI’s and minimize the width w to choose the best fitting SE model. This approach is vet another approach to choose am ong the com peting equivalent models. Exploiting the block diagonality of the IFIM in Equation 15 for the SE model in Equations 11-13, we c.an also construct 100(1 -a)% ACI’s for the

10. INFERENCE PROBLEMS WITH EQUIVALENT MODELS

311

generalized variance of Cov (fi) and for the generalized variance of Cov (v£)), respectively. This will be useful in studying the stability of the param eter estimates especially in Monte Carlo experiments, because we know the true values of the mean vcctor n and the implied model covari­ ance matrix £.

ACKNOWLEDGMENTS Special thanks 10 Margaret L. Williams, Mark B. Gavin and Sherry L. Maga­ zine for their helpful comments on earlier drafts of this chapter, and to Julia HufTer and Carol Wood for their word processing support during the preparation of this chapter.

REFERENCES Abdel-Halim, A. A. (1978). Employee affective response to organizational stress: M oderating effects of jo b characteristics. Personnel Psychology, 31, 561-570. Adams. J. S. (1963). Toward an understanding o f inequity. Journal o f Abnormal and Social Psychology. 67, 422-436. Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle. In B. X. Petrov & F. Csaki (Eds.), Second Inter national Symposium on Information Theory (pp. 267-281). Budapest: Academiai Kiado. A nderson, J., 8c G erbing, D. (1988). Structural equation m odeling in practice: A review and recom m ended two-step approach. Psychologiccd Bulletin, 103, 411-423. Beehr, T., Walsh, J., 8c T aber, T. (1976). Relationship o f stress lo individually an d organiza­ tionally valued slates: H igher o rd e r needs as m oderator. Journal o f Applied Psychology* 61, 41-47. Bender, P. (1986). Structural m odeling and Psychometrika: An historical perspective on growth and achievem ents. Psych&melrika, 51, 35-51. Bluedorn, A. C. (1982). A unified model o f turnover from organizations. Human Relation.'«, 35. 135-153. Bollen, K., 8c l£)ng> S. (1993). testing structural equation models. Beverly Hills, CA-. Sage. Bowers. D. G.. & Seashore, S. E. (1966). P redicting organizational effectiveness with a fourfactor theory o f leadership. Administrative Science Quarterly, 11, 238-263. Bozdogan, H . (1987). Model selection and Akaike’s Inform ation C riterion (AIC): T he general theory and its analytical extensions [Special issue]. Psyr.hotneirik/i, 52(3), 345-370. Bozdogan, II. (1988). ICOMP: A new model selection criterion. In 1flans II. Bock (Ed.), Classification and related method', of data analysis (pp. 599-608). A m sterdam : X orth-H olland. Bozdogau, 11. ( 1990a. Decem ber). Multisample cluster analysis of the common principal component model in K groups using an entropic statistical complexity criUvion. P aper presented al the International Symposium on Theory and Practice o f Classification, l’uschino, Soviet U nion. Bozdogan, H. (1990b). O n die inform ation-based m easure of covariancc complexity an d it.s application to the evaluation o f multivariate linear models. Communication* in Statistics, Theory and Methods, 19{I), 221-278.

312

WILLIAMS, BOZDOGAN. AIMAN-SMITH

Bozdogan, H, (1991, Ju n e ). A new information theoretic measure of complexity index for model evaluation in general structural equation models with latent variables. P aper presented at th e Symposium on Model Selection in Covariance Structures at th e Jo in t M eetings o f Psy­ chom etric Society and the Classification Society. Rutgers University, Newark, NJ. Bozdogan, H. (1992). A nern approximate amfutenre interval (A Cl) n il m m / for model scle.ctum. Working paper. Bozdogan, H. (1993). Choosing the num ber o f co m p o n en t clusters in th e mixmre-model using a new inform ational complexity criterion o f th e inverse-Fisher inform ation matrix. In O . O pitz, B. 1.arisen, fe R. Klar (Eds.), Studies in class ifimtiim, data analysis, and knowletlgc organization (pp. 40-54). Heidelberg: Springer Veilag. Bozdogan. H. (1994a, Ju n e ). Hayesion factor analysis model and choosing the n vm lw oj factors using a nexv informational complexity criterion. Paper presented at th e Second A nnual Meeting of the International Soceity for Bayesian Analysis, Alicante, Spain. Bozdogan, H . (1994b). Mixture-model cluster analysis using m odel selection criteria and a new inform ational m easure o f complexity. In H. Bozdogan (Ed.), Multivariate statistical modeling. Vol. 2. Proceedings o f the I'lrst US/Japan Conference on the Frontiers of Statistical Modeling: An informational approach (pp. 69-113). D ordrecht, N etherlands: Kluwer. B o/dogan, H. (1994c, A ugust). Subset selection o f jrre.dit.ton in Bayesian regression model using a new informational complexity criterion. P aper presented at the ASA M eeting on Bayesian Statistics and Inform ation Theory’, T oron to , Canada. Bozdogan. H. (in prcparation-a). Informational complexity and multivariate statistical modeling. Bozdogan. H. (in prcparation-b). Statistical modeling and model evaluation: A new informational approach. Cliff) N. (1985). Some cautions concerning the application o f causal m odeling m ethods. Multivariate Behavioral Research, 18, 115-126. Cudek, R. (1989). Analysis of correlation matrices using covariance structure models. Psychological Bulletin, 105, 317-327. D uncan, O. D. (1975). Introduction lo structural equation models. New York: Academic. G raen, G., Novak, M., $c Som rnerkam p, P. (1982). T h e effects o f leader-m em ber exchange and jo b design on productivity and attachm ent: t esting a dual attachm ent m odel. Organ­ izational Behavior and Hum/m Performance, 30, 109-131. House. R. j., Sc Rizzo, J. R. (1972). Role conflict and role ambiguity as critical variables in a m odel o f organizational behavior. Organizational Behavior and Human Performance, /, 467505. House, R. )., Schuler, R. S., & l.cvauoui, E. (1983). Role conflict an d role am biguity scales: Reality or artifacts? journal o f Applied Psychology, 68, 334—337. James, I.. R., G em , M. J., H ater, J. J., Sc Coiav, K. P.. (1978). Correlates of psychological influence: An illustration of the psychological clim ate approach to work environm ent perceptions. Personnel /Psychology, 32, 5t>3-588. James, L. R., Mulaik, S.. Sc Brett, J . (1982). Causal analysis. Beverly Hills, CA: Sage. Jones, A. P., Sc James, L. R. (1979). Psychological climate: Dimensions an d relationships of individual and aggregate work environm en t perceptions. Organizational Behavior and H u­ man Performance, 23, 201-250. Joreskog, K. G., Sc Sorbom, D. (1989). USRl'.L 7: A guide lo ihe program and aftpUcalions (2nd cd.). 
Chicago: Statistical Package for the Social Sciences, Inc./M cG raw Hill. Lance, C. E. (1991). Evaluation o f a structural model relatin g jo b satisfaction, organizational com m itm ent, and precursors lo voluntary turnover. Multivariate Behaviorrd Research, 26, 137-162. Lee, S., &■ H ershberger, S. (1990). A sim ple rule for generating equivalent models in covan ance structure modeling. Multivariate Behavioral Research* 25, 313-334. MacCallum, R. (1986). Specification searches in covariance structure m odeling. Psychological Bulletin, If If), 107-120.

10. INFERENCE PROBLEMS W1TII EQUIVALENT MODELS

313

MacCallum, R., W cgcncr, D., Uc.hmo, B., & Fabrigar, L. (1993). T h e problem o f equivalent models in applications of covariance structure analysis. Psychological bulletin, / 14, 185-199. Magnus, J. R. (1988). lAnettr structures. New York: Oxford University Press. Magnus, J. R., 8c Neudecker, H. (1988). Matrix differential calculus with applications in statistics and econometrics. New York: Wiley. Maklad, M. S., 8c Nichols, T. (1980). A new approach to m odel structure discrim ination. IEKE Transactions on Systems, Man, and Cybernetics, SMC-10 (2), 78-64. M athicu, J. (1991). A cross-lcvcl non recursive m odel o f the antecedents o f organizational com m itm ent and satisfaction. Journal of Applied Psychology, 76, 607- 618. M athieu, J.. & Farr, J. (1991). F urther evidence to r th e discrim inant validity o f measures o f organizational com m itm ent, jo b involvement, an d jo b satisfaction. Joumrd of Applied Psy­ chology, 76. 123-133. Mislevy, P. (1986). R ecent developm ents in the factor analysis o f categorical variables.yoarHrt/ oj Educational Statistics, / 1, 3-31. Moos, R. H. (1980). Croup environment scale. Palo Alto. CA: Consulting Psychologists Press. Mulaik, S., James, L., Van Alstiiie,J., B ennett, N., Lind, S., & Stilwell. C. (1989). An evaluation o f goodness of fit indices for structural equation m odels. Psychological Bulletin, 105,430-445. Murray, II. J. (1938). Explorations' in personality. New York: O xford University Press. M uthen, B., 8c Kaplan, D. (1992). A com parison o f som e m ethodologies lor the factor analysis of non-norm al Likert variables: A note on the size o f th e m odel. British Journal of Mathe­ matical and Statistical Psychology, 45, 19-30. P ar/en, F.. (1983). T im e series identification by estim ating inform ation. In S. K arlin, T. Amcmiya, & I.. A. G oodm an, Studies in econometrics, lime series, and multivariate statistics (pp. 279-298). New York: Academic Press. Podsakoir, P. M., Niehofl, B. P., MacKenzie, S. B., & Williams, M. L. (1993). Do substitutes to r leadership really substitute for leadership? An em pirical exam ination o f Kerr an d Jcrm icr's situational leadership model. Organizational Behavior and Human Decision Processes, 54, 1-44. Price, J. I... 8c Blue.rlorn, A. C. (1980). Test o f a causal m odel o f tuniovet from organi/ation.s. In D. Dunkcrley 8c G. Sal am an (Eds.), Intemalumtd yearbook o f urganizalvmal stiulies 1979. L ondon: Routeldge & Kegan Paul. Rissanen, J , (1976). M im nax entropy estimation o f m odels for vector processes. In R. K. Me lira 8c D. G. Lainiotis (Eds.), System Identification (pp. 97-119). New York: Academic Press. Rissanen, J . (1978). M odeling bv shortest data description. Aulomotica, 14, 465—471. Rissanen, J. (1989). Stochastic complexity in statistical inr/uby. Tcancck, NJ: W orld Scientific Publishing Company. Rizzo, J . R., H ouse, R. J., 8c Lirtzm an, S. D. (1970). Role conflict an d ambiguity in com plex organizations. Administrative Science Quarterly, 15, 150-163. Roger, G. S. (1980). Matrix derivatives. New York: Marcel Dekker. Schriesheim , C. A. (1978). Development, validation, and application of neiv leader behavior and expectancy research instruments. U npublished doctoral dissertation. T he O hio State Univer­ sity. Columbus. Sims, H. R., Szilagyi, A. D., 8c Keller. P. T. (1976). I h e m easurem ent o f jo b characteristics. Academy o f Management Journal, 19, 195-212. Steers. R. M., 8c Braurtstein, D. N. (1976). A behaviorally based m easure of manifest needs in work settings. 
Journal of Vocational Behavior, 9, 251-266. Stelzl, I. (1986). Changing a causal hypothesis w ithout changing th e fit: Some rules for generating equivalent path models. Multivariate Behavioral Research, 21, 309-331. Stone, E. F. (1974). The moderating effect o f work-related values on the job scofte-job satisfaction relationship. U npublished doctoral dissertation, University o f California at Irvine.

314

WILLIAMS, BOZDOGAN, AIMAN-SM1TH

Van Emden, M. H. (1971). An analysis oj complexity. A msterdam: M athematical C entre Tracts 35. W'anous, J. P. (1973). Effects of a realistic jo b preview on jo b acceptance, jo b attitudes, and jo b survival. Journal o f Applied Psychology, 58, 327-332. W atanabe, S. {1985). Pattern recognition: Human ami mechanical. New'York: Wiley. Williams, L .J ., Gavin. M. B., Sc Williams, M. L. (1994. A pril). Controlling for method effects in employe altitude research. P aper presented at the 1994 annual m eeting o f the Society for Industrial and O rganizational Psychology. Williams, L. J., & Hazer, J. T. (1986). A ntecedents an d consequences o f satisfaction and com m itm ent in turnover models: A reanalvsis using latent variable structural equation m ethods. Journal of Applied Psychology, 71, 219-231. Williams, L. & H olahan, P. (1994). Parsimony-based fit indices for m ultiplc-indicator m od­ els: Do the\* work? Structural Equation Modeling, /, 161-189.

C H A P T E R

E L E V E N

An Evaluation of Incremental Fit Indices: A Clarification of Mathematical and Empirical Properties H e rb e rt W. M arsh University of Western Sydney, Macarthur

John R. Balia University of Sydney Kit-Tai H a u The Chinese University of Hong Kong

W hen evaluating the goodness of fit of structural equations models (SEMs), researchers rely in part on subjcctivc indices of fit as well as a variety of other characteristics (for m ore extensive overviews of assessing goodness of fit, see Bentler, 1990; Ben tier Sc Bonett, 1980; Bollen, 1989b; Browne Sc Cudcck, 1989; Cudcck & Browne, 1983; Gerbing Sc Anderson, 1993; Marsh, Balia, & McDonald, 1988; McDonald & Marsh, 1990; Tanaka, 1993). T here are, however, a plethora of different indices with no consensus as to which are the best. Adding to this confusion, major statistical packages (e.g., LISREL8, EQS, CALIS: SAS) tend to be overinclusive in their default presentation of indices, automatically including some that are known to have undesirable properties. Because there is no “best” index, researchers are advised to use a variety of qualitatively different indices from different families of measures (Bollen Sc Long, 1993; Tanaka, 1993). W hereas there is no broadly agreed upon typology of indices, the family o f increm ental fit indices is one o f ihe most popular. Increm ental fit indices appeal to researchers wanting an apparently straight-foi'ward evaluation o f ihe ability of a model to fit. observed data that varies along a 0 to 1 scale and is easily understood. Bentler and Bonett (1980) popularized this approach and developed an index lhat seem ed to satisfy these requirem ents. Subsequent research, however, indicated that ihe interpretation of increm ental fit indices was more complicated than initially anticipated. T h e num ber of increm ental indices proliferated, partly 315

316

MARSH, BALLA, HAU

iii response 10 real or apparent limitations in existing increm ental fit in­ dices, and more or less successful attem pts to incorporate o th er desirable properties that evolved from this particularly active area o f research. The usefulness of increm ental fit indices was further hindered by the lack of standardization in the names assigned to the same index, including the independent rediscovery or sim ultaneous discovery of the same index by different researchers. The purposes o f this chapter are to more clearly delineate the increm ental fit indices in popular use and to evaluate these indices in relation to desirable criteria for these indices. O ur task has been simplified, perhaps, hyjoreskog and Sorbom ’s (1993) decision to include five increm ental fit indices as the default o u tp u t in their LISREL8 program. Because o f I.ISREL’s historical dom inance in confirmatory factor analysis and structural equation modeling, we have used Joreskog and Sorbom ’s selection as an operational definition o f popu­ lar usage. There is, however, considerable similarity in the increm ental indices included in the more popular of the growing num ber of packages that arc challenging LISREL’s dom inance (e.g., EQS). For reasons that we discuss later in this chapter, we have added two additional indices— actually variations of indices provided by L1SREL8. The seven incremental fit indices considered are defined in Table 11.1. In order to provide further clarification, key references to each index are presented along with most of the alternative labels that have been used in the literature. Wc begin the chapter with a brief historical overview of increm ental fit indices.

INCREMENTAL FIT INDICES Bender and Bonett’s (1980) Indices T he use of incremental fit indices was popularized by Bentler and Bonett (1980). Based in part on earlier work by Tucker and Lewis (1973), diey argued that il was desirable to assess goodness o f fit along a 0 to 1 con­ tinuum in which the zcro-point reflects a baseline, or worst possible fit, and 1 reflects an optim um fit. They suggested that a null model in which all the measured variables were posited to be mutually uncorrelated, pro­ vided an appropriate basis for defining the zero-point. O th er researchers have argued that other m ore realistic models may provide a better baseline against which to evaluate target models (Marsh & Balia, 1994; Sobel & B ohm stedt, 1986); these alternative baseline models are typically idiosyn­ cratic to a particular application and have not been widely accepted. Bentler and Bonett proposed two increm ental fit indices: the NonN orm ed Fit Index (NNFI) based on the work by Tucker and Lewis, and their new N orm cd Fit Index (NFI). The major distinction between these indices was that the NFI was strictly nonned to fall on a 0 to 1 continuum ,

TABLE 11.1 Seven Incremental Fit Indices: Alternative Labels and Definitions

Other Labels

Index N onnorm ed Fit Index [NNFI] (Tucker & Lewis, 1973; Bentler & Bonnet, 1980} Normed Tucker Lewis Index [NT LI] (this volume) Normcd Fit Index [NFI] (Bender & Bonnett. 1980) Relative Fit Index (RFI) (Bollen, 1986) Incremental Fit Index (IF1) (Bollen, 1989a) Relative Noncentrality Index (RX1) (McDonald & Marsh, 1090) Comparative Fit Index (CFI) Bender (1990)

F o f CHISQ

F' o f FF

F ' o f NCP

TLI; r / d j - 12; rho; p; o r p2; BEBOUC

[ x V 'f t - x\//(£( I,) - 1„),

(2)

where I4 and I, are values of a “stand alone” (nonincrem ental) index for an appropriately defined baseline model (typically a null model that as­ sumes that all measured variables arc uncorrelaled) and the target model

11. FIT INDICES FOR STRUCTURAL EQUATION MODELS

319

respectively, and E(I,) is the expected value for the stand alone index for a "true" target m odel that is corrccdy specified so that there is no misspecification. Variations of a and b are appropriate for stand alone indices in which poorer fits are reflected by larger values (e.g., x3) . and by smaller values, respectively. Thus, for example, they evaluated the xVdf-12, X2/d f: II, xM2, 1 that are referred to here as the NNFI, Relative Fit Index (RFI), Increm ental Fit Index (IFI), and NFI, respectively (see also Marsh & Balia, 1986). Marsh ct al. evaluated a wide variety of Type-1 and Type-2 increm ental fit indices. In their Monte Carlo study, they considered 7 different sample sizes and a variety of models based on real and simulated data. T heir results suggested that increm ental Type-1 indices were norm ed in the sample but had values lhat were systematically related to sample size. In contrast, Type-2 indices were not norm ed and were not systematically re­ lated to sample size. These conclusions, however, were prem ature. In a trivial sense they were wrong in that some stand alone indices have expected values of 0 so that ihe Type-1 and Type-2 forms are equivalent. McDonald and Marsh (1990) subsequently evaluated the mathematical properties of a num ber of these fit indices by expressing them in term s of the population noncentrality' param eter for a target model (8,), which is independent of N and can be estimated from sample data by dt: d, = ( t f - d Q / M

(3)

O f particular relevance, they dem onstrated that FFI2 and %T2 are m athe­ matically equivalent (i.e., substituting Xs for FF = X? x ,Y into Equation 2 results in the same value) and should vary with N according to their mathematical form. This effect was not found by Marsh et al. (1988), due in part lo the small num ber of rcplicatcs of each sample size and the nature of the data that were considered. However, McDonald and Marsh (1990; see also Marsh, 1995) dem onstrated thai FFI2 and which is called IFI here, should be biased in finite samples. This is shown by the following expression where E(x2) for a true target m odel is the degrees o f freedom for ihe model (df,): IFI = (X 'i-tf)/(X ‘i - d f ) = L(Ado + df0) - (Ad, + df,)]/[(AfcU + dfD) - df,] = [(do - d,) + (d£> - df,)/A )]/ Id, + (df„ - df,)/A'J = 1 - '[d/tdfl + (d£, - df,)/A]j

w

McDonald and Marsh (1990) noted that “it may be verified lhat this quan­ tity approaches its asymptote from above, overestimating it in small sam­ ples" (p. 2 5 0 ). In s p e c tio n o f E q u a tio n 4 also re v e a le d th a t th e ovcrestimation tends to disappear as misspecification (5,) approaches zero.

320

MARSH, BAJLLA, 1IAU

T he Type-1 and Type-2 forms o f increm ental fit indices proposed by Marsh et al. (1988) were a heuristic basis for generating a large n u m b er o f different increm ental indices, an d m any o f the indices have subsequently been proposed by o th e r re se a rc h e s. I lowever, subsequent research— par­ ticularly that by M cDonald and Marsh (1990)— indicated that n o t all Type-2 indices are unbiased, thus und erm in in g som e o f the usefulness of the Type-1 and Type-2 distinction. Indices Proposed by Bollen (1 9 8 6 ,1989a, 1989b) Bollen (1986, 1989b, 1990; Bollen & l.ong, 1993) has em phasized the use­ fulness of indices whose estim ated values arc unrelated o r only weakly re­ lated to sample size and that provide a correction for m odel complexity. Bollen (1986) suggested lhat the N N F I was a function of sam ple size and proposed th e R F 1 to correct this problem . M cDonald an d M arsh (1990), however, showed mathem atically that the N N F I "should not exhibit any systematic relation to sample size” (p. 249), a conclusion that was consistent with M onte Carlo results (e.g., A nderson & G erbing, 1984; B ender, 1990; Bollen, 1989a, 1989b; Marsh, Balia, &■ M cDonald. 1988). Marsh c ta l. (1988, A ppendix 1; sec also Marsh & Balia, 1986) also dem onstrated lh a t the R F i was a variation o f their general form o f Type-1 increm ental indices based on the X2/ d f ratio. Values for R F I were shown to vary systematically with .V in M onte Carlo research (Marsh et al., 1988) and by its m athem atical form (McDonald & M arsh, 1990). T he usefulness of R F I is u n d erm in ed by the fact that it is biased by N (i.e., A ' is systematically related to the m eans of its sam pling distribution). Bollen (1989a, 1989b, 1990) subsequently proposed the I F I (which he callcd A2; see Tabic 11.1), a new fit index that was in ten d ed to correct this problem o f sam ple size d ependency and to provide a correction for degrees of freedom . A lthough derived independently by Bollen (see Bollen, 1990, footnote 1), Bollen actually rediscovered the Type-2 increm ental fit index for the X' that had been proposed several years earlier by Marsh and Balia (1986; Marsh ct al., 1988). W hereas the I F I index should probably be attributed lo Marsh and colleagues rath er than lo Bollen, a subsequent evaluation of the index by M cDonald an d Marsh (1990) suggested that the I F I is flawed and should not be considered further. If these claims are substantiated, th e n "credit” for a possibly discredited index may be o f dubious value. McDonald and Marsh (1990; M arsh, 1995; sec also E quation 4) dem ­ onstrated that I F I should be biased by N according lo its m athem atical form and that the size o f ihe bias should approach zero as misspecification approaches zero. M ore specifically, I F I should ap p ro ach its asymptotic value from above (i.e., bccom e systematically less positively biased for in-

H. FIT INDICES FOR STRUCTURAL EQUATION MODELS

321

crcasing Ns). but the size of the bias should decrease as the degree of misspecificalion approached zero. Bollen (1989a) and Bentlcr (1990) both dem onstrated that the index was unbiased in Monte Carlo studies of a correctly specified (“true") model. However, because the index is not biased when there is no misspecification, these were not critical tests o f If I. Bentler also evaluated a slightly misspccified model and reported that IFI was relatively unrelated to N. Marsh (1995), however, noted that whereas the differences reported by Bender were small, there appeared to be a system­ atic pattern of effects that was consistent with McDonald and M arsh’s suggestions. Marsh (1995). expanding on observations by McDonald and Marsh (1990), also suggested that the nature of the penalty for model complexity in the IFI may be inappropriate. Hence, an o th er purpose of this chapter is to evaluate these characteristics of the IFI, using a Monte Carlo study more specifically designed to consider these features than previous research. Incremental Indiccs Based on Noncentrality The most recently developed increm ental fit indices are based on the non­ centrality param eter (8) from the noncentral chi-square distribution (e.g., Bentlcr, 1990; McDonald, 1989; McDonald & Marsh, 1990; Steiger & Lind, 1980) and its sample estimate (Equation 3). Bentler (1990) and McDonald and Marsh (1990) both emphasized that the noncentrality param eter re­ flects a natural measure of model misspecificaiion. Two features of this research particularly relevant to this chapter are the expression of incre­ mental fit indices in terms of the noncenlrality param eter and the devel­ opm ent of new increm ental fit indices based specifically on nonccntrality. Thus, for example, McDonald and Marsh (1990) derived expressions of NFI, NNFI, RFI, and IFI in terms of nonccntrality, df, and sample size. In this form, they dem onstrated that NNFI should be independent o f sample size, whereas values for the oth er three indices should vary systematically with sample size. They also dem onstrated that the critical difference be­ tween NNFI and NFI was not that the NFI is norm ed (i.e., is bounded by 0 and 1 in the sample) whereas NNFI was not. Rather, they argued that the more im portant distinction was that the NNFI is an unbiased estimate of a quantity that incorporates a correction for model complexity', whereas NFI is a biased estimate of a quantity that does not. (They also argued for the use of the label Tucker Lewis Index [TLIJ instead o f NNFI, but we have used the NNFI label here to be consistent with notation used in major SF.M statistical packages). McDonald and Marsh proposed RNI that pro­ vides an unbiased estimate of the asymptotic values estimated (wilh bias) by NFI and IFI. They concluded that researchers wanting to use a incre­ mental fit index should logically choose between the RNI and the NNFI.

322

MARSH, BAIJ.A. HAU

W orking independently from a sim ilar perspective, B entler (1990) also em phasized th e noncentrality param eter (in an article published in the same issue of Psychological Bulletin as the M cDonald an d M arsh study). Bentler (1990) initially proposed a new increm ental fit index identical to the RNI, but showed that sample estim ates o f this index were n o t strictly b o u n d ed by 0 and 1. F or this reason, he proposed a n o n n e d version o f RNI, called CFI, in which values falling outside o f th e 0-1 range are tru n cated so lhat CFI is strictly bounded by 0 and 1 in sam ples as well as in the population (see Table 11.1). Bentler then noted that. NFI, IFI. CFI, and RNI all have the same asymptotic limit, which differs from that o f NNFI an d RFI. C onsistent with conclusions by M cDonald and Marsh (1990), B entler suggested that the NNFI reflects th e relative reduction in noncentrality p er degree o f freedom so lhat “it does ap p e ar to have a parsim ony rationale” (p. 241). Bcntlcr argued that RNI was b etter behaved than NNFI in th at (a) sam pling fluctuations are g reater for NNFI than RNI, (b) NNFI estim ates are more likely lo be negative, and (c) when NNFI cxcccds 1, RNI will cxcced 1 by a smaller am ount. H e then pointed out that the standard deviation o f estimates for CFI must be less than o r equal to th at o f RNI. This led Bcntlcr to prefer CFI over RNI, and RNI over NNFI. T he reasons for this preference of CFI and RNI over NNFI, however, d id not seem lo reflect that NNFI is a qualitatively different index from RNI an d CFI. Bentler (1992) subsequently noted com m ents by M cDonald and M arsh (1990) about NNFI, defending the use of the NNFI label instead o f the one recom m ended by McDonald and Marsh. M ore im portantly, recognizing the validity of concerns for parsimony, Bentler (1992) still con ten d ed that “in my cu rren t opinion, the NNFI is not as good as the CFI in m easuring the quality o f m odel fit" (p. 401) and that “I prefer not to mix th e separate criteria offii and m odel parsim ony into a single index” (p. 401). Bentler (1990) also dem onstrated the behavior of NFI, NNFI, IFI, RNI, and CFI in a sim ulation study in which a true and slightly misspecified m odel was fit to the same data varying in sam ple size. He em phasized that the sample size effect was evident in NFI hut not in any of the o th er indices. Consistent with previous research, he rep o rted that the range o f the NNFI (.570 to 1.355 for the true m odel with N - 50) was very large and that th e within-cell SDs for NNFI values were consistently m uch larger than for th e o th e r indices. Focusing on results based on small samples (Ar = 50) with a true model, Bentler noted that the standard deviations (SDs) for CFI were sm aller than for any o f the o th e r indices. However, the CFI SDs are expected to he substantially sm aller for true m odels (where half the values o f CFI would be expected to exceed 1.0 if the values were not truncated) and sm aller sample sizes. Thus, for the slightly inisspecified m odel the SDs for NFI and IFI tended to be as small or smaller than those for the CFI (particularly for N> 50), although the NNFI SDs were still substantially larger th an for the o th e r


indices. Bentler also noted that for the slightly misspecified model, the NNFIs (mean = .892) were consistently lower than for the other indices (mean = .95). These results led Bentler to prefer CFI, noting however that its advantages were at the expense of a slight downward bias (due to the truncation of values greater than 1.0).

Population values of CFI and RNI are equal to each other and strictly bounded by 0 and 1. Whereas sample estimates of RNI can fall outside of a 0-1 range, corresponding values of CFI are assigned a value of 0 when RNI < 0 and a value of 1.0 when RNI > 1 (Table 11.1; see also discussion by Gerbing & Anderson, 1993; Goffin, 1993). In practice, this distinction is not very important because such extreme values are rare, values of RNI > 1 or CFI = 1 both lead to the conclusion that the fit is excellent, and values of RNI < 0 or CFI = 0 both lead to the conclusion that the fit is very poor. In a comparison of RNI and CFI, Goffin (1993) concluded that RNI may be preferable for purposes of model comparison, whereas CFI may be preferred with respect to efficiency of estimation. In Monte Carlo studies, however, the difference between CFI and RNI is particularly important when "true" models (i.e., delta_t = 0) are considered. For such models, the expected value of RNI is 1.0 (i.e., approximately half the sample estimates will be above 1 and half below) and this value should not vary with N. The expected value of CFI, however, must be less than 1.0 for any finite N (because CFI is truncated not to exceed 1.0) and the size of this negative bias should be a systematically decreasing function of N. For this reason, it may be desirable to consider RNI in addition to, or instead of, CFI, at least for Monte Carlo studies in which a "true" model is fit to the data.
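The relations among these indices can be illustrated with a short computational sketch. The function below is not taken from the chapter; it simply applies the standard definitions of NFI, IFI, NNFI/TLI, and RNI to the chi-square values of a target and a null model, and obtains CFI by truncating RNI to the 0-1 range as described above. The numeric inputs in the example are hypothetical.

def incremental_fit_indices(chi2_t, df_t, chi2_0, df_0):
    """Compute NFI, IFI, NNFI/TLI, RNI, and CFI from target (t) and null (0) models.

    Uses the standard definitions; CFI is obtained here by truncating RNI to 0-1,
    as described in the text.
    """
    nfi = (chi2_0 - chi2_t) / chi2_0
    ifi = (chi2_0 - chi2_t) / (chi2_0 - df_t)
    nnfi = (chi2_0 / df_0 - chi2_t / df_t) / (chi2_0 / df_0 - 1.0)
    rni = ((chi2_0 - df_0) - (chi2_t - df_t)) / (chi2_0 - df_0)
    cfi = min(1.0, max(0.0, rni))  # normed counterpart of RNI
    return {"NFI": nfi, "IFI": ifi, "NNFI": nnfi, "RNI": rni, "CFI": cfi}

# Hypothetical example: a slightly misspecified target model
print(incremental_fit_indices(chi2_t=30.0, df_t=24, chi2_0=600.0, df_0=36))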

POSSIBLY DESIRABLE CHARACTERISTICS FOR INCREMENTAL INDICES

Different researchers have proposed a variety of desirable characteristics that may be useful for incremental fit indices and their evaluation. Consequently, we next discuss the most widely recommended of these criteria, including the effect of sample size, appropriate penalties for model complexity and rewards for model parsimony, sampling fluctuations, and interpretable metrics.

Sample Size Independence

Researchers (e.g., Bollen, 1990; Bollen & Long, 1993; Gerbing & Anderson, 1993) have routinely proposed that a systematic relation between sample size and the values of an index of fit is undesirable, and this characteristic was evaluated in detail by Marsh et al. (1988) and McDonald and Marsh


(1990). Bollen (1990), attempting to clarify what is meant by sample size effect, distinguished between cases in which (a) N is associated with the means of the sampling distribution of an index (the effect of N emphasized by Marsh et al., 1988, and by Gerbing & Anderson, 1993), and (b) values calculated for an index vary as a function of N. The critical concern for purposes of this discussion is whether the expected value of an index varies systematically with N (Bollen's first case), and this is what we mean when we refer to a sample size effect or bias. Similarly, Bollen and Long (1993) recommended the use of indices "whose means of their sampling distribution are unrelated or only weakly related to the sample size" (p. 8), whereas Gerbing and Anderson (1993) suggested that an ideal fit index should be independent of sample size in that higher or lower values should not be obtained simply because the sample size is large or small.

Penalty for Model Complexity

Researchers have routinely recommended indices that control for model complexity and reward model parsimony (e.g., Bollen, 1989b, 1990; Bollen & Long, 1993; Bozdogan, 1987; Browne & Cudeck, 1989; Cudeck & Browne, 1983; Gerbing & Anderson, 1993; Marsh & Balla, 1994; Mulaik et al., 1989; Steiger & Lind, 1980; Tanaka, 1993; but for possibly alternative perspectives see Bentler, 1992; McDonald & Marsh, 1990; Marsh & Balla, 1994; Marsh & Hau, 1994). Marsh and Balla (1994), however, emphasized two very different perspectives of this issue, which they referred to as (population) parsimony penalties and (sample) estimation penalties.

Penalties for (a lack of) parsimony, according to the distinction offered by Marsh and Balla (1994), do not vary with sample size and are intended to achieve a compromise between model parsimony and complexity at the population level. As noted by McDonald and Marsh (1990), such a compromise would be needed even if the population were known in that freeing enough parameters will still lead to a perfect fit. Although many alternative operationalizations of parsimony penalties are possible, one popular approach is the set of parsimony indices described by Mulaik et al. (1989; see also McDonald & Marsh, 1990). They recommended the use of a penalty that does not vary with N so that their parsimony indices are unbiased (so long as their parsimony correction is applied to indices that are unbiased). They proposed the parsimony ratio, the ratio of the degrees of freedom in the target model and the degrees of freedom in the null model (df_t/df_0), as an operationalization of model parsimony. In their implementation of this penalty they recommended that other indices, including incremental fit indices, should simply be multiplied by the parsimony ratio. Thus, for example, Joreskog and Sorbom (1993) define the parsimony NFI (PNFI) as the product of the NFI and the parsimony ratio.
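As a minimal illustration of this multiplicative penalty (the function and the numeric values below are hypothetical, not taken from the chapter):

def parsimony_nfi(nfi, df_t, df_0):
    """PNFI: the NFI multiplied by Mulaik et al.'s parsimony ratio (df_t / df_0)."""
    return (df_t / df_0) * nfi

# Hypothetical values: NFI = .95, target-model df = 24, null-model df = 36
print(parsimony_nfi(0.95, 24, 36))  # about .63: complexity is penalized even though NFI is high

The same multiplication can in principle be applied to any incremental index, which is the idea taken up in the next paragraph.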


Marsh and Balla (1994) suggested that it would be more appropriate to define parsimony indices in terms of indices that did not vary with sample size and evaluated the PRNI based on the product of the parsimony ratio and the RNI. In their Monte Carlo study, however, they found that parsimony indices overpenalized model complexity in relation to criteria that they considered. Although we do not consider parsimony indices per se in this chapter, McDonald and Marsh (1990) demonstrated that the NNFI can be expressed as a function of the parsimony ratio, the noncentrality estimate of the target model, and the noncentrality estimate of the null model (see Table 11.1).

Estimation penalties, according to the Marsh and Balla (1994) distinction, are aimed at controlling for capitalizing on chance when fitting sample data. From this perspective, Cudeck and Browne (1983; Browne & Cudeck, 1989, 1993; Cudeck & Henly, 1991) claimed that under appropriate conditions a dependency on sample size may be appropriate. In particular, they argued that model complexity should be penalized more severely for small N where sampling fluctuations are likely to be greater. They supported this contention by showing that less complex models cross-validated better than more complex models when sample size was small, but that more complex models performed better when sample size was sufficiently large (see McDonald & Marsh, 1990, for further discussion). Based on this research, they proposed penalties for model complexity that were a function of sample size such that model complexity is more severely penalized at small N, moderately penalized for moderate N, and not penalized at all for a sufficiently large N. Although this type of penalty has not been incorporated into any of the incremental indices considered here, it is interesting to note that the negative sample size bias in NFI works in this fashion. At small N, the NFI is negatively biased. With increasing N, the size of the bias decreases and for a sufficiently large N it disappears. In contrast, McDonald and Marsh (1990; Marsh, 1994) claimed that the IFI is positively biased, not negatively biased, for small N and that the size of this positive bias decreases with increasing N. Hence, the direction, as well as the existence, of this sample size effect in IFI is particularly worrisome. Consistent with these perspectives, Bollen (1989b, 1990; Bollen & Long, 1993), Gerbing and Anderson (1993), and others have argued that an appropriate adjustment for model complexity is a desirable feature. For this reason, it is appropriate to evaluate the incremental fit indices considered here in relation to this criterion. RNI, CFI, and NFI do not contain any penalty for model complexity. NNFI (and its normed counterpart NTLI first proposed here; see Table 11.1) and RFI do provide a penalty for model complexity, but this feature in the RFI is complicated by the sample size bias in this index. There remains some controversy about the nature


and appropriateness of the penalty for model complexity in the IFI. Although Bollen (1989a, 1989b) claimed that the IFI adjusts for df and Gerbing and Anderson (1993) reported that this is an important strength of the index, Marsh (1995) and McDonald and Marsh (1990; see also Equation 4) claimed that the nature of this penalty was inappropriate. However, Marsh et al. (1988) found no support for this claim in their Monte Carlo study. Furthermore, Marsh (1995) cited no empirical support for this suggestion and apparently no appropriate Monte Carlo studies have been conducted to evaluate this aspect of the IFI. Hence, another purpose of this chapter is to provide a stronger test of the nature of the correction for df in the IFI.

Reliability of Estimation and an Interpretable Metric

Reliability of estimation (i.e., precision of estimation and a relative lack of sampling fluctuations) is an important characteristic of incremental fit indices that has not been given sufficient attention. In Monte Carlo studies, this feature is typically represented by the within-cell standard deviation of the estimates for a particular index. Based on this criterion, many researchers (e.g., Bentler, 1990; Bollen & Long, 1993; Gerbing & Anderson, 1993) recommended that the NNFI should be considered cautiously because of the apparent large sampling fluctuations found in simulation studies. The appropriateness of this approach rests in part on the implicit assumption that all the incremental fit indices vary along the same underlying metric (i.e., a 0 to 1 metric in which .90 reflects an acceptable fit).

The juxtaposition of concerns about sampling fluctuations and the underlying metric has not received adequate attention. Whereas estimation reliability is a desirable characteristic that is reflected in part by within-cell standard deviations, this is not a fully appropriate basis for evaluating the relative precision of estimation in different indices. If, for example, two indices vary along different metrics, then within-cell standard deviations are not comparable. Two trivial examples demonstrate some of our concerns: (a) if an index has a constant value (say, 1.0) for all correctly and incorrectly specified models, then it will have no within-cell variation; and (b) if an index is multiplied by a constant, then the within-cell standard deviation must vary accordingly. A more appropriate measure of reliability of estimation should reflect the relative sizes of within-cell variation compared to between-cell variation due to systematic differences in model misspecification. This situation is analogous to the estimation of reliability as a ratio of true score variance to total variance and not just (raw score) error variance. Whereas there are many ways in which this notion could be operationalized, one approach is to evaluate the proportion of variance due to systematic differences in model misspecification. Hence, no


variation due to model misspecification is explained by an index with a constant value in Example a, and variance explained is not affected by multiplication by a constant in Example b. From this perspective, the minimum condition necessary for evaluating reliability of estimation is to test a variety of different models that vary systematically in terms of model misspecification including, perhaps, a true model with no misspecification. Because most simulation studies have considered only true models and apparently none has evaluated some index of variance explained as an indication of reliability of estimation, conclusions based on this previous research must be evaluated cautiously. Alternative approaches to the comparison of relative sampling fluctuations are considered in this chapter.
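One possible operationalization of this proportion-of-variance idea is sketched below; it is not the chapter's own procedure, and the data layout and variable names are hypothetical. It computes the share of total variation in an index that lies between cells differing in the level of misspecification, analogous to an eta-squared.

import numpy as np

def misspecification_eta_squared(values_by_cell):
    """Proportion of total variance in a fit index that is between cells
    differing in model misspecification (a reliability-like measure).

    values_by_cell: list of 1-D arrays, one array of replicate index values
    per misspecification condition (hypothetical layout).
    """
    all_values = np.concatenate(values_by_cell)
    grand_mean = all_values.mean()
    ss_total = ((all_values - grand_mean) ** 2).sum()
    ss_between = sum(len(v) * (v.mean() - grand_mean) ** 2 for v in values_by_cell)
    return ss_between / ss_total

# Hypothetical replicate values of an index under three levels of misspecification
rng = np.random.default_rng(0)
cells = [rng.normal(mu, 0.02, size=200) for mu in (0.99, 0.95, 0.90)]
print(misspecification_eta_squared(cells))

An index with a constant value yields 0 by this measure, and multiplying an index by a constant leaves the measure unchanged, which is the point of the two examples above.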

REVIEW OF SIMULATION STUDIES

Gerbing and Anderson (1993) evaluated the design of Monte Carlo studies of goodness of fit indices and reviewed results from this research with the primary aim "to provide the substantive researcher with guidance regarding the choice of indices to use" (p. 40). Although Gerbing and Anderson did not limit their consideration to incremental fit indices, this family of measures was emphasized in their review and is the basis of discussion presented here. They reviewed the initial research by Bentler and Bonett (1980), the Marsh et al. (1988) classification of Type-1 and Type-2 indices, Bollen's (1986, 1989a, 1989b) indices, and the comparison of CFI and RNI indices. In their evaluation of fit indices, they emphasized sample size effects, appropriate corrections for model complexity, and distributional properties of fit indices. In their discussion of Monte Carlo studies, they emphasized that appropriate compromises between adequacy and manageability must be achieved in choosing the appropriate design to study the behavior of particular indices. Design characteristics considered by Gerbing and Anderson that are particularly important to our discussion include the number of replications for each cell of the design, the range of sample sizes (i.e., the number of cases within each replication), the range of models, and the consideration of true and false models. They recommended that at least 100 or more replications per cell in the design are needed to provide an accurate estimate of population values. Whereas most early simulation studies evaluated only true models in which data were generated by the model to be tested, they emphasized the need to evaluate incorrect models to test the sensitivity of indices to misspecification.

Early simulation studies reviewed by Gerbing and Anderson (1993) evaluated the ability of true models to fit data varying systematically in sample size (e.g., Anderson & Gerbing, 1984; Boomsma, 1982). Anderson and Gerbing (1984), the most comprehensive of these early studies, found


that NFI was systematically affected by sample size whereas the NNFI was not, but that NNFI had much larger within-cell standard deviations. Gerbing and Anderson noted that these trends were replicated by Marsh et al. (1988), despite the use of only 10 replications per cell in that study. Gerbing and Anderson also reviewed Bollen's (1989a) research, indicating that IFI is relatively unaffected by sample size, and "an adjustment for the available df is provided, in that a model with fewer parameters to estimate will provide a higher DELTA2 [IFI] value than a model with more parameters to estimate" (p. 53). They concluded that IFI was better than NFI because it was free from the sample size bias and better than NNFI because it had considerably smaller standard errors. Gerbing and Anderson (1993) then evaluated recent developments in indices based on noncentrality discussed earlier, noting the similarity in the CFI and RNI indices independently developed by Bentler (1990) and McDonald and Marsh (1990), respectively. Based on their review, Gerbing and Anderson recommended two incremental fit indices, RNI (or its bounded counterpart CFI) and IFI. They also recommended that further Monte Carlo research was needed to study the behavior of misspecified models in which true paths were omitted and false paths were included. Their recommendation is also considered in this chapter.

Further Consideration of IFI

Gerbing and Anderson (1993) reviewed Marsh et al. (1988) and McDonald and Marsh (1990), but apparently did not realize that IFI had been previously proposed by Marsh et al. under a different name and further criticized by McDonald and Marsh. Based on their review of Monte Carlo studies, Gerbing and Anderson recommended the IFI because it was apparently unbiased and provided an apparently appropriate adjustment for df that penalized a lack of parsimony. However, the more critical evaluation of IFI offered by McDonald and Marsh suggests that Gerbing and Anderson's recommendation may have been premature. There seems to be an unresolved controversy about IFI reflecting suggestions by McDonald and Marsh (1990) and Marsh (1995), based on the mathematical form of the index and empirical results from Monte Carlo studies such as those reviewed by Gerbing and Anderson (1993). The resolution of this controversy requires a stronger Monte Carlo study in which sample sizes and the degree of misspecification in the approximating model are varied systematically over a wider range of values than considered in previous studies of this index, and in which there are enough replications particularly for small sample sizes where sampling fluctuations are larger. Furthermore, whereas most of this controversy has focused on conditions under which IFI is or is not related to sample size, Marsh (1995) argued that the correction for


model complexity in the IFI is also inappropriate and opposite to the direction inferred by Gerbing and Anderson (1993) and others. Although these claims apparently run counter to implications offered elsewhere, there seem to be no relevant Monte Carlo studies of the behavior of this aspect of the IFI.

Further Consideration of NNFI

McDonald and Marsh (1990; see also Bentler, 1990) demonstrated that when the NNFI was expressed in terms of the noncentrality and the parsimony ratio (Equation 5), it became apparent that NNFI was qualitatively different from other incremental fit indices such as NFI and RNI. The NNFI, by its mathematical form (McDonald & Marsh, 1990) and on the basis of Monte Carlo results (Marsh & Balla, 1994), provides a parsimony correction, a penalty for model complexity.

NNFI = [d_0/df_0 - d_t/df_t] / [d_0/df_0]
     = 1 - [(d_t/df_t) / (d_0/df_0)]
     = 1 - [(d_t/df_t) x (df_0/d_0)]
     = 1 - [(d_t/d_0) x (df_0/df_t)]
     = 1 - [(d_t/d_0) / PR]                                        (5)

where PR = df_t/df_0 is the parsimony ratio recommended by Mulaik et al. (1989). Hence it is clear that NNFI incorporates the parsimony ratio recommended by Mulaik et al. (1989) and satisfies preferences by Bollen and Long (1993), Gerbing and Anderson (1993), and Tanaka (1993) for fit indices that control for model complexity by taking into account the degrees of freedom of a model. The form of this adjustment for df is appropriate in that model complexity is penalized. Marsh and Balla (1994; see also McDonald & Marsh, 1990) also discuss how the penalty function in the NNFI differs from that in other indices such as those discussed by Cudeck and Henly (1991) and by Mulaik et al. (1989).

Marsh (1995) suggested that this property of the NNFI may be particularly useful in tests of nested models (because the NNFI penalizes model complexity). Thus, for example, the inclusion of additional parameters (decreases in df_t) can result in a lower NNFI when the improvement in fit (decreases in d_t) is sufficiently small even though the RNI can never be smaller. Conversely, the NNFI rewards model parsimony. Thus, for example, the imposition of equality constraints to test the invariance of solutions over multiple groups may actually result in a higher NNFI even though the RNI can never be higher (e.g., Marsh & Byrne, 1993; Marsh, Byrne, & Craven, 1992). Similarly, a higher order factor model that is nested


under the corresponding first-order measurement model can have a higher NNFI even though the RNI can never be higher. This feature of the NNFI provides one potentially useful decision rule (i.e., accept the more parsimonious model if its NNFI is equal to or better than the less parsimonious model) for concluding that the difference between a more complex model and a more parsimonious model is not substantively important, an aspect of fit not captured by the RNI. Indeed, researchers typically interpret values of greater than .90 as "acceptable" for incremental fit indices like the RNI, but there appears to be no compelling rationale for this rule of thumb. Also, the RNI provides no objective decision rule for choosing between alternative models that result in RNIs greater than .9 but differ substantially in terms of parsimony. Whereas the application of the RNI decision rule logically leads to the selection of the least parsimonious model in a set of nested models, the NNFI may lead to the selection of a model with intermediate complexity. Because the nature of the penalty for model complexity embodied in the NNFI is not broadly recognized, there is a need for more research to evaluate this aspect of the NNFI decision rule.

The major criticism of the NNFI has been its large sampling fluctuations. However, interpretations of previous research must be made cautiously because of the apparently overly simplistic manner in which reliability of estimation has been assessed, and this concern will be an important focus of our discussion in the chapter. It is also apparent, however, that NNFI is extremely unstable in some situations. Thus, for example, Anderson and Gerbing (1984) reported extreme NNFIs far in excess of 1.0 when sample size was very small. Inspection of the definition of NNFI (Table 11.1 and Equation 5) demonstrates that the index is undefined due to division by zero when d_0 = 0 (i.e., the null model is able to fit the data) or df_t = 0 (i.e., the target model is the saturated model), and is likely to be very unstable in situations approximating these conditions. Whereas it would be extremely unlikely for population values of noncentrality to approach zero for the null model (i.e., for the null model to be "true"), this can happen for sample estimates based on small Ns (e.g., N < 50), and this apparently accounts for the extremely large values of NNFI reported in some simulation studies. A viable strategy to avoid such extreme values and associated problems with large sampling fluctuations is to develop a normed counterpart of the NNFI. This strategy is somewhat analogous to the comparison of RNI and its normed counterpart CFI. In the case of the NNFI, however, the advantages of a normed counterpart are likely to be much greater because extreme values of NNFI are apparently much more likely than those for RNI (Bentler, 1990). Thus, anyone preferring the normed CFI over the RNI, its unnormed counterpart, should also prefer the normed version of the NNFI over the unnormed version that has been so severely criticized for problems related to sampling fluctuations.
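Returning to the decision rule described above, a small numerical sketch may help. The chi-square values below are hypothetical: Model B frees three additional parameters relative to Model A but improves the chi-square by only 3 points, so NNFI goes down while RNI is unchanged, which is exactly the contrast the rule exploits.

chi2_0, df_0 = 900.0, 36  # null model (hypothetical)
models = {"A (more parsimonious)": (60.0, 30), "B (3 extra parameters)": (57.0, 27)}

for label, (chi2_t, df_t) in models.items():
    nnfi = (chi2_0 / df_0 - chi2_t / df_t) / (chi2_0 / df_0 - 1.0)
    rni = ((chi2_0 - df_0) - (chi2_t - df_t)) / (chi2_0 - df_0)
    print(f"{label}: NNFI = {nnfi:.3f}, RNI = {rni:.3f}")
# A: NNFI = 0.958, RNI = 0.965
# B: NNFI = 0.954, RNI = 0.965  -> the NNFI rule favors the more parsimonious Model A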


Following McDonald and Marsh's (1990) recommendation that the label Tucker-Lewis index (TLI) should be used instead of NNFI, a new index, the normed Tucker-Lewis index (NTLI), is defined according to the following rules:

1. If NNFI < 0 or d_0 < 0 then NTLI = 0, but
2. If NNFI > 1 or d_t < 0 (including df_t = 0, and both d_t and d_0 < 0) then NTLI = 1.0; otherwise
3. NTLI = NNFI,

where NNFI is defined as in Equation 5 and Table 11.1. (In order to emphasize more clearly which indices are merely normed versions of unnormed indices, our preferred labels for NNFI, RNI, and their normed counterparts would be TLI [instead of NNFI], NTLI, RNI, and NRNI [instead of CFI], but popular usage of the labels NNFI and CFI may not allow for such a logical nomenclature.) Hence, NTLI is strictly normed for sample and population values, is defined when NNFI is undefined (i.e., d_0 = 0 and for the saturated model with df_t = 0), and takes on its maximum value of 1.0 whenever d_t < 0. NTLI should have smaller within-cell SDs than NNFI, particularly when N is small, df_t approaches 0, or delta_0 approaches 0. This smaller within-cell SD, however, is at the expense of a slight downward bias in NTLI that should be evident when delta_t approaches 0, particularly with small N.
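A minimal sketch of these rules follows. It is an illustration, not the authors' code: the NNFI value is assumed to be computable (the d_0 = 0 case, where NNFI itself is undefined, is not handled here), and the precedence between the two truncation rules when both d_t and d_0 are negative is resolved in favor of Rule 2, following the parenthetical above.

def ntli(nnfi, d_t, d_0, df_t):
    """Normed Tucker-Lewis index, following the three rules stated above.

    nnfi      : the NNFI/TLI value from Equation 5.
    d_t, d_0  : sample noncentrality estimates for the target and null models.
    df_t      : degrees of freedom of the target model.
    """
    if d_t < 0 or df_t == 0 or nnfi > 1:  # Rule 2 -> truncate upward
        return 1.0
    if nnfi < 0 or d_0 < 0:               # Rule 1 -> truncate downward
        return 0.0
    return nnfi                           # Rule 3 -> leave unchanged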

A PRIORI PREDICTIONS FOR THE PRESENT ANALYSIS

In this section, we evaluate the behavior of the NFI, IFI, RFI, NNFI, NTLI, CFI, and RNI in a large Monte Carlo study. Based on a known population model used to generate simulated data (see Fig. 11.1), models are hypothesized that are true (i.e., misspecification is zero), over-specified (i.e., superfluous parameters known to be zero in the population are added), parsimonious (i.e., parameters known to be equal in the population are constrained to be equal), and misspecified to varying degrees (varying numbers of parameters known to be nonzero in the population are constrained to be zero). We include a wide range of different sample sizes (100 to 5,000) and include more replicates in cells where small sample sizes produce large sampling fluctuations (e.g., 1,000 replicates in cells with N = 100). For purposes of this analysis, we focus on the comparison of the RNI with the other incremental fit indices. This is appropriate because it is well established that the RNI is not systematically related to N and contains no penalty for model complexity (see McDonald & Marsh, 1990, and earlier discussion), thus providing a basis of comparison for the other indices in


FIG. 11.1. Population generating model used to create the simulated data.

relation to these characteristics. Consistent with earlier discussion and previous research, it is predicted that:

1. RFI and NFI are positively related to sample size for all approximating models.

2. RNI and its normed counterpart, CFI, are identical for most cases, except for true models or nearly true models tested with small Ns where RNI will be greater than CFI (whenever RNI > 1) and CFI will be positively related to sample size. Whereas it is technically possible for RNI to be negative and thus smaller than CFI (e.g., when d_0 < 0), simulated data considered here are unlikely to result in this occurrence.

3. NNFI and its normed counterpart, NTLI, are also identical for most cases. The most likely exceptions are: (a) true models or nearly true models tested with small Ns where NTLI will be smaller than NNFI (whenever NNFI > 1) and NTLI will be positively related to sample size; or (b) relatively unusual cases when the null model is able to fit the sample data due to sampling fluctuations so that NNFI is unstable or undefined.

4. RNI is relatively unrelated to sample size for all approximating models. For all true models with misspecification of zero, including those with invariance constraints and superfluous parameters, the mean value of RNI is 1.0. For misspecified models, the inclusion of superfluous parameters leads to higher RNIs whereas the imposition of equality constraints leads to lower RNIs (i.e., RNI rewards the inclusion of superfluous parameters and penalizes parsimony).

5. NNFI is relatively unrelated to sample size for all approximating models. For all true models with misspecification of zero, including those with invariance constraints and superfluous parameters, the mean value of NNFI is 1.0. For misspecified models, the inclusion of superfluous parameters leads to lower NNFIs whereas equality constraints lead to higher NNFIs (i.e., NNFI penalizes the inclusion of superfluous parameters and rewards parsimony). Thus, in contrast to the RNI (and CFI), NNFI (and NTLI) provides an appropriate penalty for model complexity and reward for model parsimony.


6. IFI is relatively unrelated to sample size for all true approximating models, but IFI is negatively related to sample size for all misspecified approximating models. For all true approximating models, including those with invariance constraints and superfluous parameters, the mean value of IFI is 1.0. For misspecified models, the inclusion of superfluous parameters leads to increases in IFI that are as large or larger than those found with RNI (i.e., IFI rewards the inclusion of superfluous parameters as much or more than RNI). The imposition of equality constraints known to be true in the population leads to decreases in IFI that are as large or larger than those found with RNI (i.e., IFI penalizes parsimony as much or more than RNI).

In addition to these a priori predictions, we also consider research questions about the reliability of estimates in different incremental fit indices. In particular, as discussed earlier, we compare results based on the within-cell standard deviations (the traditional approach) and true score variation due to the level of model misspecification. More specifically, we compare results based on these alternative approaches to evaluate suggestions that NNFI has greater sampling variation than the other incremental fit indices and to extend this consideration to the new NTLI fit index.

METHODS

Analyses were conducted with the PC version of LISREL 8 (Joreskog & Sorbom, 1993) and the accompanying GENRAW procedure. A population covariance matrix was generated from a population model (Fig. 11.1) derived from a multiple-indicator simplex model (Marsh, 1993; see also Marsh & Hau, 1994). In this hypothetical model, a set of three indicators of a single latent variable is administered on each of three occasions. Each measured variable is substantially related to its latent variable (.6), but also has a substantial component of measurement error (.64). Residual covariances for the same measured variable administered on different occasions, autocorrelated errors, are moderate (.4). A population covariance matrix based on Fig. 11.1 was constructed with LISREL 8 and a total of 100,000 cases were simulated from this population covariance matrix by GENRAW (see discussion of Monte Carlo studies with LISREL 8 by Joreskog & Sorbom, 1993). In order to evaluate the effects of sample size, the 100,000 cases were divided into 1,000 replicates of N = 100 cases, 500 replicates of N = 200, 200 replicates of N = 500, 100 replicates of N = 1000, 50 replicates of N = 2000, and 20 replicates of N = 5000. By holding the total number of cases (sample size x number of replicates) constant across each sample size, the standard error of the mean for each cell of the design was more nearly equal than would be the case if the number of replicates was held constant. This is a


particularly desirable feature in the evaluation of sample size effects because typically estimates based on small sample sizes are of particular interest, but the sampling variability of these estimates is systematically higher than those based on larger sample sizes when the number of replicates is held constant for all sample sizes.

In order to evaluate the effects of varying degrees of misspecification, four approximating models were fit to the data. Model 1 was a "true" model with no misspecification (all nonzero parameter estimates in the population generating model, Fig. 11.1, were freely estimated). Models 2, 3, and 4 differed from Model 1 in that one (Model 2), three (Model 3), or all nine (Model 4) of the nine nonzero correlated uniquenesses in the population were constrained to be zero in the approximating model. In order to evaluate the effects of invariance constraints known to be true in the population and superfluous parameters known to be zero in the population, three versions of each model were considered. In Version 1, no superfluous parameters or invariance constraints were considered. In Version 2, three additional correlated uniquenesses known to be zero in the population (the operationalization of superfluous parameters considered here) were freely estimated. In Version 3, all nine uniquenesses that were known to be equal in the population were constrained to be equal (the operationalization of a more parsimonious model considered here). For purposes of this analysis we refer to the versions as normal (Version 1), overfit (Version 2), and parsimonious (Version 3). In summary, the analysis had 72 cells representing all combinations of 6 (sample sizes) x 4 (models) x 3 (versions).

As previously indicated, seven incremental indices (Table 11.1) were considered. The mean, standard deviation, and standard error were computed for all indices in each of the 72 cells of the study, excluding only the nonconverged solutions (see Gerbing & Anderson, 1987, 1993, for a rationale for this approach). Because a particular emphasis of the analysis was on the comparison of RNI with each of the other indices, difference scores were also summarized in which the values for each index were subtracted from RNI. In order to assess the relative size of the various effects and to provide a nominal test of statistical significance, a three-way ANOVA was conducted in which the effects of the six sample sizes, the four models, and the three versions were considered.
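The bookkeeping of this design can be summarized in a short sketch. The study itself was run with LISREL 8 and GENRAW; the code below only reproduces the layout of sample sizes, replicate counts, and cells described above.

# Replicates are chosen so that sample size x number of replicates is constant (100,000).
TOTAL_CASES = 100_000
sample_sizes = [100, 200, 500, 1000, 2000, 5000]
replicates = {n: TOTAL_CASES // n for n in sample_sizes}
# {100: 1000, 200: 500, 500: 200, 1000: 100, 2000: 50, 5000: 20}

models = [1, 2, 3, 4]                               # 0, 1, 3, or 9 correlated uniquenesses fixed to zero
versions = ["normal", "overfit", "parsimonious"]    # Versions 1-3

cells = [(n, m, v) for n in sample_sizes for m in models for v in versions]
assert len(cells) == 72                             # 6 sample sizes x 4 models x 3 versions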

RESULTS AND DISCUSSION

Mean Values of the Incremental Fit Indices

CFI vs. RNI. The results are summarized in Tables 11.2 and 11.3 and in Fig. 11.2. CFI and RNI are identical in most cases. However, for all

TABLE 11.2
Goodness of Fit Indices: Correlations with Log Sample Size (r), Mean and Standard Error of the Mean for Each of the Twelve Cells (4 Models x 3 Versions; see Fig. 11.2)

[Table body not reproduced: for each index (RNI, NNFI, IFI, CFI, RFI, NFI, NTLI) and for each difference score relative to RNI (DNNFIRNI, DIFIRNI, DRFIRNI, DCFIRNI, DNFIRNI, DNTLIRNI), the table reports r, Mean, and SE within each of the 12 model-by-version cells (Models 1-4, Versions 1-3).]

Note. Considered here are seven goodness of fit indices (RNI, NNFI, IFI, CFI, RFI, NFI, NTLI) and six sets of difference scores determined by subtracting RNI from each index (DNNFIRNI, DIFIRNI, DRFIRNI, DCFIRNI, DNFIRNI, DNTLIRNI). Within each of the 12 combinations of model and version, the mean and standard error of each index was calculated and the fit indices were correlated with the log sample size. rs > .040 are statistically significant (p < .05, two-tailed).

TABLE 11.3
Variance (Sums of Squared Deviations) Attributable to Sample Size, Model, and Version for Seven Indices

Source          df      IFI        RNI        CFI        NNFI        RFI         NFI        NTLI
Size (S)         5      .233*      .016       .001       .059        18.780*     5.874*     .002
Model (M)        3      595.143*   623.147*   611.923*   1169.808*   978.035*    578.804*   1133.622*
Version (V)      2      2.994*     2.335*     2.401*     26.492*     22.252*     8.607*     27.192*
S x M           15      .231*      .039       .115*      .094        2.904*      .392*      .402*
S x V           10      .102*      .041       .044*      .104        .262*       .855*      .080
M x V            6      1.979*     1.445*     1.386*     19.119*     16.040*     1.200*     18.442*
S x M x V       30      .069       .019       .017       .039        .127        .013       .057
Explained       71      600.827*   627.088*   615.929*   1215.427*   1037.835*   595.736*   1179.491*
Residual     22327      37.183     40.345     39.137     106.954     87.912      33.252     101.737
Total        22398      638.009    667.433    655.065    1322.381    1125.747    628.988    1281.228

Variance due to (eta)
Residual                .241       .246       .244       .284        .279        .230       .282
Model                   .966       .966       .966       .941        .932        .920       .941
M + V + M x V           .970       .969       .969       .959        .950        .967       .960

Note. Considered here are seven goodness of fit indices (RNI, NNFI, IFI, CFI, RFI, NFI, NTLI). Variance components were derived from a 6 (Sample Size) x 4 (Model) x 3 (Version) analysis of variance conducted separately for each index that also resulted in nominal tests of statistical significance for each effect. *p < .01.

FIG. 11.2. (Continued) [Figure panels not reproduced: the recoverable axis and panel labels indicate plots of mean RFI, mean RFI - RNI difference, mean IFI, mean IFI - RNI difference, and mean NNFI - RNI difference across the model-by-version cells (M2V2 through M4V3).]



chi-square_1/df_1]), or
have no systematic effect if misspecification in the constraints is the same as that in the original target model (i.e., chi-square_2/df_2 = chi-square_1/df_1).

4. The change in NNFI due to the addition of new parameters to the original model will depend on the degree of misspecification in the original target model and the degree of misspecification due to not


allowing the new parameters to be freely estimated. The rationale for this statement follows that from constraints in Item 3.

CHAPTER SUMMARY

The RNI and NNFI were both well behaved in the present comparison in that values of these indices were relatively unrelated to sample size. However, these two indices behaved differently in relation to the introduction of superfluous parameters and of equality constraints. RNI has no penalty for model complexity or reward for model parsimony, whereas NNFI penalizes complexity and rewards parsimony. In this respect, the two indices reflect qualitatively different, apparently complementary characteristics. Based on these results, we recommend that researchers wanting to use incremental fit indices should consider both RNI (or perhaps its normed counterpart, CFI) and NNFI (or perhaps its normed counterpart, NTLI). The juxtaposition between the two should be particularly useful in the evaluation of a series of nested or partially nested models. The RNI provides an index of the change in fit due to the introduction of new parameters or constraints on the model, but will typically lead to the selection of the least parsimonious model within a nested sequence. Here the researcher must use a degree of subjectivity in determining whether the change in fit is justified in relation to the change in parsimony. The NNFI embodies a control for model complexity and a reward for parsimony such that the optimal NNFI may be achieved for a model of intermediate complexity. This juxtaposition of the two indices was clearly evident in the present comparison in that the introduction of superfluous parameters (Version 2 of each misspecified model) led to an increase in RNIs but a decrease in NNFIs, whereas the imposition of equality constraints (Version 3 of each misspecified model) led to a decrease in RNIs but an increase in NNFIs. Whereas these incremental indices may be useful in the evaluation of a single a priori model considered in isolation, we suggest that they will be even more useful in the evaluation of a series of viable alternative models, particularly if the set of alternative models are nested or partially nested.

It is important to emphasize that the recommendation of the RNI and NNFI does not preclude the use of other, nonincremental fit indices. Along with Bollen and Long (1993) and Gerbing and Anderson (1993), we recommend that researchers consider the most appropriate indices from different families of measures. It is, however, important to critically evaluate the alternative indices in different families of measures as we have done with the incremental indices. More generally, it is important to reiterate that subjective indices of fit like those considered here should be only one component in the overall evaluation of a model.


REFERENCES

Anderson, J. C., & Gerbing, D. W. (1984). The effect of sampling error on convergence, improper solutions, and goodness-of-fit indices for maximum likelihood confirmatory factor analysis. Psychometrika, 49, 155-173.
Bentler, P. M. (1990). Comparative fit indices in structural models. Psychological Bulletin, 107, 238-246.
Bentler, P. M. (1992). On the fit of models to covariances and methodology to the Bulletin. Psychological Bulletin, 112, 400-404.
Bentler, P. M., & Bonett, D. G. (1980). Significance tests and goodness of fit in the analysis of covariance structures. Psychological Bulletin, 88, 588-606.
Bollen, K. A. (1986). Sample size and Bentler and Bonett's nonnormed fit index. Psychometrika, 51, 375-377.
Bollen, K. A. (1989a). A new incremental fit index for general structural equation models. Sociological Methods and Research, 17, 303-316.
Bollen, K. A. (1989b). Structural equations with latent variables. New York: Wiley.
Bollen, K. A. (1990). Overall fit in covariance structure models: Two types of sample size effects. Psychological Bulletin, 107, 256-259.
Bollen, K. A., & Long, J. S. (1993). Introduction. In K. A. Bollen & J. S. Long (Eds.), Testing structural equation models (pp. 1-9). Newbury Park, CA: Sage.
Boomsma, A. (1982). The robustness of LISREL against small sample size in factor analysis models. In K. G. Joreskog & H. Wold (Eds.), Systems under indirect observation: Causality, structure, prediction (Part I, pp. 149-173). Amsterdam: North-Holland.
Bozdogan, H. (1987). Model selection and Akaike's information criterion (AIC): The general theory and its analytical extensions. Psychometrika, 52, 345-370.
Browne, M. W., & Cudeck, R. (1989). Single-sample cross-validation indices for covariance structures. Multivariate Behavioral Research, 24, 445-455.
Browne, M. W., & Cudeck, R. (1993). Alternative ways of assessing model fit. In K. A. Bollen & J. S. Long (Eds.), Testing structural equation models (pp. 136-162). Newbury Park, CA: Sage.
Cudeck, R., & Browne, M. W. (1983). Cross-validation of covariance structures. Multivariate Behavioral Research, 18, 147-167.
Cudeck, R., & Henly, S. J. (1991). Model selection in covariance structures analysis and the "problem" of sample size: A clarification. Psychological Bulletin, 109, 512-519.
Gerbing, D. W., & Anderson, J. C. (1987). Improper solutions in the analysis of covariance structures: Their interpretability and a comparison of alternative specifications. Psychometrika, 52, 99-111.
Gerbing, D. W., & Anderson, J. C. (1993). Monte Carlo evaluations of goodness-of-fit indices for structural equation models. In K. A. Bollen & J. S. Long (Eds.), Testing structural equation models (pp. 40-65). Newbury Park, CA: Sage.
Goffin, R. D. (1993). A comparison of two new indices for the assessment of fit of structural equation models. Multivariate Behavioral Research, 28, 205-214.
Joreskog, K. G. (1993). Testing structural equation models. In K. A. Bollen & J. S. Long (Eds.), Testing structural equation models (pp. 294-316). Newbury Park, CA: Sage.
Joreskog, K. G., & Sorbom, D. (1993). LISREL 8: Structural equation modeling with the SIMPLIS command language. Chicago: Scientific Software International.
Marsh, H. W. (1993). Covariance stability in multiwave panel studies: Comparison of simplex models and one-factor models. Journal of Educational Measurement, 30, 157-183.
Marsh, H. W. (1995). The Delta-2 and chi-square I2 fit indices for structural equation models: A brief note of clarification. Structural Equation Modeling, 2(3), 246-254.


Marsh, H. W., & Balla, J. R. (1986, February 26). Goodness-of-fit indices in confirmatory factor analysis: The effect of sample size. (ERIC Document Reproduction Service No. ED 267 091).
Marsh, H. W., & Balla, J. R. (1994). Goodness-of-fit indices in confirmatory factor analysis: The effect of sample size and model complexity. Quality & Quantity, 28, 185-217.
Marsh, H. W., Balla, J. R., & McDonald, R. P. (1988). Goodness-of-fit indices in confirmatory factor analysis: The effect of sample size. Psychological Bulletin, 102, 391-410.
Marsh, H. W., & Byrne, B. M. (1993). Confirmatory factor analysis of multitrait-multimethod self-concept data: Between-group and within-group invariance constraints. Multivariate Behavioral Research, 28, 315-349.
Marsh, H. W., Byrne, B. M., & Craven, R. (1992). Overcoming problems in confirmatory factor analysis of MTMM data: The correlated uniqueness model and factorial invariance. Multivariate Behavioral Research, 27, 489-507.
Marsh, H. W., & Hau, K.-T. (1994). Assessing goodness of fit: When parsimony is undesirable. Unpublished manuscript.
McDonald, R. P. (1989). An index of goodness-of-fit based on noncentrality. Journal of Classification, 6, 97-103.
McDonald, R. P., & Marsh, H. W. (1990). Choosing a multivariate model: Noncentrality and goodness-of-fit. Psychological Bulletin, 107, 247-255.
Mulaik, S. A., James, L. R., Van Alstine, J., Bennett, N., Lind, S., & Stilwell, C. D. (1989). Evaluation of goodness-of-fit indices for structural equation models. Psychological Bulletin, 105, 430-445.
Sobel, M. E., & Bohrnstedt, G. W. (1986). Use of null models in evaluating the fit of covariance structure models. In N. B. Tuma (Ed.), Sociological methodology 1985 (pp. 152-178). San Francisco: Jossey-Bass.
Steiger, J. (1990). Structural model evaluation and modification: An interval estimation approach. Multivariate Behavioral Research, 25, 173-180.
Steiger, J. H., & Lind, J. M. (1980, May). Statistically based tests for the number of common factors. Paper presented at the annual meeting of the Psychometric Society, Iowa City, IA.

... 1, 47; EQS, 1, 151, 160, 212, 217, 315; HLM, 108, 151; LISCOMP, 1, 102, 114, 120, 151; LISREL, 46-53, 66-70, 120, 152-156, 184-193, 315

Direct p ro d u c t m odel, 8, 3 2 -3 9 , 4 0 -46 Dynam ic factor analysis, 175-177, 185-192

E E quivalent m odels, 4, 279-304 E stim ator, 4-248 g en eralized least sq u ares, 227 m axim um likelihood, 4. 61, 62, 227, 245, 246, 247, 248

361

362

SUBJECT INDEX

E stim ator (Conl.) two-stage least squares, 4, 227 w eighted least squares, 62-64 E q u a t i o n o f Fit, 5-351 A ka ik e’s inform ation criterio n , 164, 301-303 chi-square test, 519, 350 com parative fit index, 327, 328. 334 H a n n a n -Q u in n criterion, 164 increm ental fit index, 315, 319, 320, 328, 334, 345-351 n o n n o rm e d fit index, 164, 316-318, 320, 329, 340, 347 nonrned Fit in d ex , 164, 316-318, 329, 349 overall, 5, 315 relative fit index, 319, 345 relative noncentrality index, 346, 347 Schw arz’s Bayesian inform ation criterio n , 164 Tuckcr-1 .cwis index, 316, 310

F F actor analysis. 264-269 F ull-inform ation estim ation, 4, 243-276

M M axim um likelihood, see E stim ator M easu rem en t m odel, 139 M eredith-T isak ap p ro a c h , 127 M issing d ata, 4 -5 , 243-276 M odel com plexity, 301 “ 303, 324—326 M odel testing, 207-210 Multilevel m odels. 3, 89-121, 114-119 M ultivariate m oving average, 167 M ultitrait-m ultim ethod design, 2, 7 -5 3

N N o n lin ear m odels, 3, 57-61 N o n n o rm e d fit in d ex , see Evaluation o f fit N o n p aram etric bootstrap, 197-199 N o rm ed 1lt in d ex , see E valuation o f fit

P P-rec.hnique facto r analysis. 173-175 Pairwise d eletio n , 4, 244 , 245, 249, 250 P ercen tile m e th o d , 205 Pow er, 218-222 P ro d u c t variables, 60, 61, 70-73

G G eneralized least squares, see E stim ator G oodness o f Fit, see Evaluation o f fit G row th m odels. 3, 106-112, 125-156

H lla n n a ii'Q u iu n criterion, w Evaulation o f fit H eleroscedastic disturbances, 234-238

I Im p u ta tio n , 271-275

R R andom shocks, 163, 164 R andom walk m odel, 172, 173

s Schw arz’s criterio n , see Evaluation o f fit S tan d ard erro rs, 202, 203, 213-215 State space m odels, 179-183. 192. 193 Stationaritv, 162, 175-179 S tru ctu ral eq u a tio n m odeling, 1-2 d efin e d , 1 S tru ctu ral m odel, 140

K K enny-judd m odel, 57-87

L L agged covariances, 161 L im ited-inform ation estim ators, 4, 227-240 Listwisc d eletio n , 4, 244. 245, 248, 249 L ogit m odel, 100-105

T T im e series m odels, 4, 159-193 Two-stage least squares, see E stim ator

V V ariance c o m p o n en ts m odel. 94-96

About the Authors

Lynda Aiinan-Smith is a doctoral student in Organizational Behavior and H um an Resources at Purdue University. She lias wor ked for many years in industry in production control, quality assurance, and as a m anufactur­ ing plant manager. Jam es I.. Arbuckle is Associate Professor in the Psychology D epartm ent at Tem ple University. He does research in structural equation modeling. John R. Balia is Senior Lecturer in ihe Faculty of Health Sciences at the University of Sydney in Australia. He received his PhD in Education from Macquaric University in 1989. His areas of interest include psychometrics, students’ learning processes, the optimal use of inform ation technology, evaluation, and the derivation of indicators of quality of various aspects of leaching, learning, and administration. P eter M. Bender is Professor of Psychology at the University o f California, Los Angeles. His research deals with theoretical and statistical problems in psychometrics and multivariate analysis, especially structural equation models, as well as with personality and applied social psychology, especially drug use and abuse. Kenneth A. Bollen is Professor of Sociology at the University of Nor th Carolina at Chapel Hill. Hi.s major research interests are in sociometrics (especially structural equation modeling) and in international dcvelop363

364

ABOUT THF. AUTHORS

uieiit. li e is au th o r o f Structural Equations With Latent Variables (1989, NY: Wiley) and is co-edilor wilh J. S. L ong o f Testing Structural Equation Models (1993, Newbury, CA: Sage). H am parsum Bozdogan is Associate Professor o f Statistics and A djunct As­ sociate P rofessor o f M athem atics at th e U niversity of T en n essee in Knoxville. H e received his doctorate in Statistics from the D epartm ent of M athem atics at th e University o f Illinois in Chicago. He was on the faculty at the University o f Virginia, and a Visiting Professor and Research Fellow at the Institute o f Statistical M athem atics in Tokyo, Japan. He is an inter­ nationally renow ned expert in the area o f inform ational statistical m odel­ ing. He is the recipient of many distinguished awards, on e of which is the prestigious C hancellor’s 1993 Award for Research an d Creative Achieve­ m ent at the University of T ennessee in Knoxville. He is Editor o f the three-volume Proceedings of the First U .S ./Japan C onference on Frontiers of Statistical Modeling: An Informational Approach. Sherry E. C om eal is Assistant Professor in the D epartm ent c>f H um an D evelopm ent an d Family Studies at T h e Pennsylvania State University. H er research interests include single subject designs an d stepfamily life. Fumiaki H am agam i is a research associate at the L. L. T hu rsto n e Psy­ chom etric Laboratory o f the University o f N orth Carolina at C hapel Hill. His research interests focus on individual differences in h u m an cognitive abilities and structural equation models. Kit-Tai H au is L ecturer in the Faculty o f Education at the Chinese U ni­ versity o f H ong Kong. He received his PhD in Psychology from the U ni­ versity of H ong Kong in 1992. His areas o f interest include developm ental psycholog)', m oral developm ent, achievem ent motivation, causal attribu­ tions, and self-concept. Scott L. H ershberger is Assistant Professor o f Quantitative Psychology in the D epartm ent o f Psychology at the University o f Kansas. His research interests include structural equation m odeling, psychometric theory, and developm ental behavior genetics. Karl G. Joreskog is Professor o f M ultivariate Statistical Analysis at Uppsala University, Sweden. His m ain interests arc in the theory an d applications of structural equation models and o th e r types o f m ultivariate analysis, particularly their applications in the social and behavioral sciences. H e is coauthor o f LISREL 7—A Guide to the Program and Applications published by SPSS, 1989, and IJSREL 8: Structural Equation Modeling Wilh the SIMPLJS Command Language published by SSI, 1993.

ABOUT THE AUTHORS

365

George A. Marcoulides is Professor of Statistics at California State Univer­ sity, Fullerton, and Adjunct Professor at the University of California, Irvine. He is the rccipicnt of the 1991 UCEA William J. Davis Memorial Award for outstanding scholarship. H e is currently president-elect o f the Western Decision Sciences Institute, Review Editor of Structural Etjualion .Modeling, and Associate Editor of The InternationalJournal of Educational Management His research interests include generalizability theory and structural equa­ tion modeling. H erbert W. Marsh is the Research Professor of Education at the University of Western Sydney-Macarthur in Australia. He received his PhD in psychol­ ogy from UCLA in 1974. His research spans a broad array of m ethodo­ logical and substantive concerns (students’ evaluations of teaching effec­ tiveness, self-concept, school effectiveness, g en d er differences, sports psychology), and he has published widely in these areas. He is the author of psychological instrum ents including multidimensional measures o f selfconcept (the SDQs) and students’ evaluations of university teaching (SKEQ). John J, McArdle is a professor in the D epartm ent o f Psychology al the University of Virginia. His research interests include structural equation m odeling and individual differences.

Peter C. M. Molenaar graduated from the University of Utrecht, The Netherlands, in mathematical psychology, psychophysiology, and time series analysis. He is currently at the University of Amsterdam and The Pennsylvania State University. He has published in the following areas: state-space modeling of developmental processes, behavior genetics, and psychophysiological signal analysis. His current interests include nonlinear dynamical approaches to epigenetics and biophysics.

Aline G. Sayer is Assistant Professor of Human Development and an Associate of the Center for the Development and Health Research Methodology, both at The Pennsylvania State University. She received an EdD in Human Development from Harvard University and was recently a Postdoctoral Fellow in Psychiatry at Harvard Medical School. Her research interests include the integration of individual growth curve modeling and structural equation modeling, and the psychosocial and cognitive development of children with chronic illnesses.

Randall E. Schumacker is Associate Professor of Educational Research at the University of North Texas, where he teaches structural equation modeling, psychometric theories, and statistics. He received his doctorate in Educational Psychology from Southern Illinois University, where he
specialized in measurement, statistics, and research methods. He is currently Editor of Structural Equation Modeling and on the editorial board of several measurement and statistics journals.

John B. Willett is a professor at the Harvard University Graduate School of Education, where he teaches courses in applied statistics and data analysis. His research interests focus on the development and explication of methods for the analysis of longitudinal data, including the use of covariance structure analysis in the measurement of change and the use of discrete-time survival analysis for investigating the occurrence and timing of critical events. Along with his colleague, Judy Singer, he was awarded the 1992 Raymond B. Cattell Early Career Award and the 1993 Review of Research Award, both given by the American Educational Research Association.

Larry J. Williams is Associate Professor and Jay Ross Young Faculty Scholar at the Krannert Graduate School of Management at Purdue University. He received his doctorate from the Indiana University School of Business in 1988. He is the Consulting Editor for the Research Methods and Analysis section of the Journal of Management, and is Co-Editor of the series, Advances in Research Methods and Analysis for Organizational Studies. His research interests include organizational behavior, method variance, and structural equation modeling techniques.

Werner Wothke is President of SmallWaters Corporation. He received his doctorate in methodology in behavioral sciences from the University of Chicago in 1984 and then served for 9 years as Vice President of Technical Operations at Scientific Software, Inc. In 1993, Dr. Wothke cofounded SmallWaters Corporation to publish and promote innovative statistical software. His research interests are in multivariate model building and statistical computing.

Fan Yang is a PhD student in statistics at Uppsala University working on a dissertation on nonlinear structural equation models.

Yiu-Fai Yung is Assistant Professor at the L. L. Thurstone Psychometric Laboratory of the University of North Carolina at Chapel Hill. He received his PhD in Psychology at UCLA in 1994, with his dissertation entitled Finite Mixtures in Confirmatory Factor-Analytic Models. His current research interests are subsampling methods and mixture models in structural equation modeling.