
Structural Equation Modeling

WILEY SERIES IN PROBABILITY AND STATISTICS Established by WALTER A. SHEWHART and SAMUEL S. WILKS Editors David J. Balding, Noel A.C. Cressie, Garrett M. Fitzmaurice, Harvey Goldstein, Geert Molenberghs, David W. Scott, Adrian F.M. Smith, and Ruey S. Tsay Editors Emeriti Vic Barnett, Ralph A. Bradley, J. Stuart Hunter, J.B. Kadane, David G. Kendall, and Jozef L. Teugels A complete list of the titles in this series appears at the end of this volume.

Structural Equation Modeling Applications Using Mplus Second Edition

Jichuan Wang George Washington University, United States

Xiaoqian Wang Mobley Group Pacific Ltd. P.R. China

This edition first published 2020
© 2020 John Wiley & Sons Ltd

Edition History: John Wiley & Sons (1e, 2012)

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.

The right of Jichuan Wang and Xiaoqian Wang to be identified as the authors of this work has been asserted in accordance with law.

Registered Offices
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA
John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

Editorial Office
9600 Garsington Road, Oxford, OX4 2DQ, UK

For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.

Wiley also publishes its books in a variety of electronic formats and by print-on-demand. Some content that appears in standard print versions of this book may not be available in other formats.

Limit of Liability/Disclaimer of Warranty
While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

Library of Congress Cataloging-in-Publication Data applied for

ISBN: 9781119422709

Cover Design: Wiley Cover Image: © Jichuan Wang Graph Design Set in 10/12pt, TimesLTStd by SPi Global, Chennai, India

10 9 8 7 6 5 4 3 2 1

Contents

Preface ix

1 Introduction to structural equation modeling 1
  1.1 Introduction 1
  1.2 Model formulation 3
    1.2.1 Measurement models 4
    1.2.2 Structural models 6
    1.2.3 Model formulation in equations 7
  1.3 Model identification 11
  1.4 Model estimation 14
    1.4.1 Bayes estimator 17
  1.5 Model fit evaluation 19
    1.5.1 The model χ² statistic 20
    1.5.2 Comparative fit index (CFI) 20
    1.5.3 Tucker Lewis index (TLI) or non-normed fit index (NNFI) 21
    1.5.4 Root mean square error of approximation (RMSEA) 22
    1.5.5 Root mean-square residual (RMR), standardized RMR (SRMR), and weighted RMR (WRMR) 22
    1.5.6 Information criteria indices 24
    1.5.7 Model fit evaluation with Bayes estimator 25
    1.5.8 Model comparison 26
  1.6 Model modification 27
  1.7 Computer programs for SEM 28
  Appendix 1.A Expressing variances and covariances among observed variables as functions of model parameters 30
  Appendix 1.B Maximum likelihood function for SEM 32

2 Confirmatory factor analysis 33
  2.1 Introduction 33
  2.2 Basics of CFA models 34
    2.2.1 Latent variables/factors 39
    2.2.2 Indicator variables 39
    2.2.3 Item parceling 40
    2.2.4 Factor loadings 42
    2.2.5 Measurement errors 42
    2.2.6 Item reliability 44
    2.2.7 Scale reliability 44
  2.3 CFA models with continuous indicators 45
    2.3.1 Alternative methods for factor scaling 52
    2.3.2 Model estimated item reliability 57
    2.3.3 Model modification based on modification indices 57
    2.3.4 Model estimated scale reliability 58
    2.3.5 Item parceling 60
  2.4 CFA models with non-normal and censored continuous indicators 61
    2.4.1 Testing non-normality 61
    2.4.2 CFA models with non-normal indicators 62
    2.4.3 CFA models with censored data 67
  2.5 CFA models with categorical indicators 70
    2.5.1 CFA models with binary indicators 72
    2.5.2 CFA models with ordinal categorical indicators 76
  2.6 The item response theory (IRT) model and the graded response model (GRM) 77
    2.6.1 The item response theory (IRT) model 77
    2.6.2 The graded response model (GRM) 86
  2.7 Higher-order CFA models 91
  2.8 Bifactor models 96
  2.9 Bayesian CFA models 102
  2.10 Plausible values of latent variables 110
  Appendix 2.A BSI-18 instrument 113
  Appendix 2.B Item reliability 114
  Appendix 2.C Cronbach's alpha coefficient 116
  Appendix 2.D Calculating probabilities using probit regression coefficients 117

3 Structural equation models 119
  3.1 Introduction 119
  3.2 Multiple indicators, multiple causes (MIMIC) model 120
    3.2.1 Interaction effects between covariates 126
    3.2.2 Differential item functioning (DIF) 127
  3.3 General structural equation models 137
    3.3.1 Testing indirect effects 141
  3.4 Correcting for measurement error in single indicator variables 144
  3.5 Testing interactions involving latent variables 150
  3.6 Moderated mediating effect models 153
    3.6.1 Bootstrap confidence intervals 159
    3.6.2 Estimating counterfactual-based causal effects in Mplus 160
  3.7 Using plausible values of latent variables in secondary analysis 164
  3.8 Bayesian structural equation modeling (BSEM) 167
  Appendix 3.A Influence of measurement errors 173
  Appendix 3.B Fraction of missing information (FMI) 175

4 Latent growth modeling (LGM) for longitudinal data analysis 177
  4.1 Introduction 177
  4.2 Linear LGM 178
    4.2.1 Unconditional linear LGM 178
    4.2.2 LGM with time-invariant covariates 184
    4.2.3 LGM with time-invariant and time-varying covariates 189
  4.3 Nonlinear LGM 192
    4.3.1 LGM with polynomial time functions 192
    4.3.2 Piecewise LGM 203
    4.3.3 Free time scores 210
    4.3.4 LGM with distal outcomes 211
  4.4 Multiprocess LGM 216
  4.5 Two-part LGM 221
  4.6 LGM with categorical outcomes 229
  4.7 LGM with individually varying times of observation 238
  4.8 Dynamic structural equation modeling (DSEM) 241
    4.8.1 DSEM using observed centering for covariates 241
    4.8.2 Residual DSEM (RDSEM) using observed centering for covariates 245
    4.8.3 Residual DSEM (RDSEM) using latent variable centering for covariates 248

5 Multigroup modeling 253
  5.1 Introduction 253
  5.2 Multigroup CFA models 254
    5.2.1 Multigroup first-order CFA 258
    5.2.2 Multigroup second-order CFA 289
    5.2.3 Multigroup CFA with categorical indicators 306
  5.3 Multigroup SEM 316
    5.3.1 Testing invariance of structural path coefficients across groups 322
    5.3.2 Testing invariance of indirect effects across groups 326
  5.4 Multigroup latent growth modeling (LGM) 327
    5.4.1 Testing invariance of the growth function 332
    5.4.2 Testing invariance of latent growth factor means 335

6 Mixture modeling 339
  6.1 Introduction 339
  6.2 Latent class analysis (LCA) modeling 340
    6.2.1 Description of LCA models 341
    6.2.2 Defining the latent classes 347
    6.2.3 Predicting class membership 347
    6.2.4 Unconditional LCA 348
    6.2.5 Directly including covariates into LCA models 360
    6.2.6 Approaches for auxiliary variables in LCA models 363
    6.2.7 Implementing the PC, three-step, Lanza's, and BCH methods 365
    6.2.8 LCA with residual covariances 370
  6.3 Extending LCA to longitudinal data analysis 373
    6.3.1 Longitudinal latent class analysis (LLCA) 373
    6.3.2 Latent transition analysis (LTA) models 375
  6.4 Growth mixture modeling (GMM) 392
    6.4.1 Unconditional growth mixture modeling (GMM) 394
    6.4.2 GMM with covariates and a distal outcome 402
  6.5 Factor mixture modeling (FMM) 411
    6.5.1 LCFA models 417
  Appendix 6.A Including covariates in LTA model 418
  Appendix 6.B Manually implementing three-step mixture modeling 434

7 Sample size for structural equation modeling 443
  7.1 Introduction 443
  7.2 The rules of thumb for sample size in SEM 444
  7.3 The Satorra-Saris method for estimating sample size 445
    7.3.1 Application of the Satorra-Saris method to CFA models 446
    7.3.2 Application of the Satorra-Saris method to latent growth models 454
  7.4 Monte Carlo simulation for estimating sample sizes 458
    7.4.1 Application of a Monte Carlo simulation to CFA models 459
    7.4.2 Application of a Monte Carlo simulation to latent growth models 463
    7.4.3 Application of a Monte Carlo simulation to latent growth models with covariates 467
    7.4.4 Application of a Monte Carlo simulation to latent growth models with missing values 469
  7.5 Estimate sample size for SEM based on model fit indexes 473
    7.5.1 Application of the MacCallum-Browne-Sugawara method 474
    7.5.2 Application of Kim's method 477
  7.6 Estimate sample sizes for latent class analysis (LCA) model 479

References 483

Index 507

Preface

The first edition of this book was one of only a few books providing detailed instruction on how to fit commonly used structural equation models with Mplus – a popular software program for latent variable modeling. The intent of the book is to provide a practical resource for conducting structural equation modeling (SEM) with Mplus in real research, as well as a reference guide for structural equation models. Since the first publication of the book in 2012, Mplus has undergone several major version updates (from Version 6.12 to Version 8.2). The updates have added many new features, including some of the latest developments in SEM. In the current edition of the book, we expand the first edition to cover more structural equation models, including some that are newly developed. All of the example Mplus programs have also been updated using Mplus 8.2. The following are the updates by chapter.

Chapter 1. Descriptions of the Bayes estimator and corresponding model fit evaluation are included.

Chapter 2. New topics/models are added, including: effect coding for factor scaling, two-parameter logistic (2PL) item response theory (IRT) models, two-parameter normal ogive (2PNO) IRT models, the two-parameter logistic form of graded response models (2PL GRM), the two-parameter normal ogive form of graded response models (2PNO GRM), bifactor confirmatory factor analysis (CFA) models, and Bayesian CFA models, as well as an approach for estimating plausible values of latent variables.

Chapter 3. Added topics include the moderated mediating effect model, the bootstrap approach for SEM, using plausible values of latent variables in secondary analysis, and Bayesian SEM.

Chapter 4. The new topics/models added in this chapter include latent growth modeling (LGM) with individually varying times of observation and dynamic structural equation modeling (DSEM). The former addresses the challenge in longitudinal data in which assessment times and specific time intervals between measurement occasions vary by individual. The latter is a recently developed model for time series analysis of intensive longitudinal data (ILD) in a SEM framework. Both DSEM and its variant – residual DSEM (RDSEM) – with and without a distal outcome are demonstrated using simulated data.

Chapter 5. Added topics include multigroup CFA with categorical indicators to evaluate measurement invariance of scales with categorical indicator variables.

Chapter 6. Several topics/models are added in this chapter, including: various auxiliary variable approaches, such as the pseudo-class (PC) method, the three-step method (automatic and manual implementation), Lanza's method, and the BCH method (automatic and manual implementation); latent


class analysis (LCA) with residual covariances; and the longitudinal latent class analysis (LLCA) model, which extends LCA to longitudinal data analysis.

Chapter 7. Power analysis and sample size estimation for LCA with dichotomous items are added.

The book covers the basic concepts, methods, and applications of SEM, including some recently developed advanced structural equation models. Written in non-mathematical terms, it discusses a variety of structural equation models for studying both cross-sectional and longitudinal data. The book provides step-by-step instructions for model specification, estimation, evaluation, and modification, and thus it is very practical. Examples of various structural equation models are demonstrated using real-world research data. The internationally well-known computer program Mplus 8.2 is used for model demonstrations, and Mplus program syntax is provided for each example model. While the data sets used for the example models in the book are drawn from public health studies, the methods and analytical approaches are applicable to all fields of quantitative social studies. Data sets are available from the first author of the book upon request.

The target readership of the book is teachers, graduate students, and researchers who are interested in understanding the basic ideas, theoretical frameworks, and methods of SEM, as well as how to implement various structural equation models using Mplus. The book can be used as a resource for learning SEM and a reference guide for conducting SEM using Mplus. The first edition has also been used by professors as a textbook at universities both within and outside the United States, and many researchers and graduate students have found it helpful for learning and practicing SEM. We believe the second edition of the book will serve as an even more useful reference for SEM with Mplus.
Readers are encouraged to contact the first author at [email protected] or [email protected] with regard to feedback, suggestions, and questions.

1 Introduction to structural equation modeling

1.1 Introduction

The origins of structural equation modeling (SEM) stem from factor analysis (Spearman 1904; Tucker 1955) and path analysis (or simultaneous equations) (Wright 1918, 1921, 1934). Integrating the measurement (factor analysis) and structural (path analysis) approaches produces a more generalized analytical framework, called a structural equation model (Jöreskog 1967, 1969, 1973; Keesling 1972; Wiley 1973). In SEM, unobservable latent variables (constructs or factors) are estimated from observed indicator variables, and the focus is on estimation of the relations among the latent variables free of the influence of measurement errors (Jöreskog 1973; Jöreskog and Sörbom 1979; Bentler 1980, 1983; Bollen 1989). SEM provides a mechanism for taking into account measurement error in the observed variables involved in a model.

In social sciences, some constructs, such as intelligence, ability, trust, self-esteem, motivation, success, ambition, prejudice, alienation, and conservatism, cannot be directly observed. They are essentially hypothetical constructs or concepts, for which there exists no operational method for direct measurement. Researchers can only find observed measures that serve as indicators of a latent variable, and such indicators usually contain sizable measurement errors. Even for variables that can be directly measured, measurement errors are always a concern in statistical analysis. Traditional statistical methods (e.g. multiple regression, analysis of variance (ANOVA), path analysis, simultaneous equations) ignore the potential measurement error of variables included in a model. If an independent variable in a multiple regression model has measurement error, then the model residuals would be correlated with


this independent variable, leading to violation of a basic statistical assumption. As a result, the parameter estimates of the regression model would be biased, resulting in incorrect conclusions. SEM provides a flexible and powerful means of simultaneously assessing the quality of measurement and examining causal relationships among constructs. That is, it offers an opportunity to construct the unobserved latent variables and estimate the relationships among the latent variables uncontaminated by measurement errors. Other advantages of SEM include, but are not limited to: the ability to model multiple dependent variables simultaneously; the ability to test overall model fit, direct and indirect effects, complex and specific hypotheses, and parameter invariance across multiple between-subjects groups; the ability to handle difficult data (e.g. time series with autocorrelated errors, non-normal, censored, count, and categorical outcomes); and the ability to combine person-centered and variable-centered analytical approaches. These model features will be discussed in the following chapters of this book.

This chapter gives a brief introduction to SEM through five steps that characterize most SEM applications (Bollen and Long 1993):

1. Model formulation (Section 1.2). This refers to correctly specifying the structural equation model that the researcher wants to test. The model may be formulated on the basis of theory or empirical findings. A general structural equation model is formed of two parts: the measurement model and the structural model.

2. Model identification (Section 1.3). This step determines whether there is a unique solution for all the free parameters in the specified model. Model estimation cannot be implemented if a model is not identified, and model estimation may not converge or reach a solution if the model is misspecified.

3. Model estimation (Section 1.4). This step estimates model parameters and generates a fitting function. Various estimation methods are available for SEM. The most common method for structural equation model estimation is maximum likelihood (ML).

4. Model evaluation (Section 1.5). After meaningful model parameter estimates are obtained, the researcher needs to assess whether the model fits the data. If the model fits the data well and the results are interpretable, then the modeling process can stop after this step.

5. Model modification (Section 1.6). If the model does not fit the data, re-specification or modification of the model is needed. In this instance, the researcher decides how to delete, add, or modify parameters in the model. The fit of the model could be improved through parameter re-specification. Once a model is re-specified, steps 1 through 4 may be carried out again; model modification may be repeated more than once in real research.

In the following sections, we introduce the SEM process step by step. Finally, Section 1.7 provides a list of computer programs available for SEM.
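The attenuation bias mentioned at the start of this section can be made concrete with a short simulation. The book's examples use Mplus; the standalone numpy sketch below is ours, not from the book, and simply demonstrates that regressing an outcome on an error-contaminated predictor shrinks the slope by the reliability factor var(ξ)/(var(ξ) + var(δ)).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_slope = 1.0

xi = rng.normal(0.0, 1.0, n)                       # true (latent) predictor, var = 1
y = true_slope * xi + rng.normal(0.0, 1.0, n)      # outcome generated from the true predictor
x_obs = xi + rng.normal(0.0, 1.0, n)               # observed predictor with measurement error, var(delta) = 1

# OLS slope of y on the observed predictor
slope_obs = np.cov(x_obs, y)[0, 1] / np.var(x_obs)

# Expected attenuation: var(xi) / (var(xi) + var(delta)) = 1 / (1 + 1) = 0.5
print(round(slope_obs, 2))  # approximately 0.5, not the true slope of 1.0
```

With a reliability of 0.5 for the observed predictor, the estimated slope is roughly half the true slope, which is exactly the kind of bias SEM avoids by modeling the latent predictor directly.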


1.2 Model formulation

In SEM, researchers begin with the specification of a model to be estimated. There are different approaches to specifying a model of interest. The most intuitive way of doing this is to describe one's model using path diagrams, as first suggested by Wright (1934). Path diagrams are fundamental to SEM since they allow researchers to formulate the model of interest in a direct and appealing fashion. The diagram provides a useful guide for clarifying a researcher's ideas about the relationships among variables and can be directly translated into corresponding equations for modeling.

Several conventions are used in developing a structural equation model path diagram, in which the observed variables (also referred to as measured variables, manifest variables, or indicators) are presented in boxes, and latent variables or factors are in circles or ovals. Relationships between variables are indicated by lines; the lack of a line connecting variables implies that no direct relationship has been hypothesized between the corresponding variables. A line with a single arrow represents a hypothesized direct relationship between two variables, with the head of the arrow pointing toward the variable being influenced by another variable. Bi-directional arrows refer to relationships or associations, rather than effects, between variables.

An example of a hypothesized general structural equation model is specified in the path diagram shown in Figure 1.1. As mentioned previously, the latent variables are enclosed in ovals and the observed variables are in boxes in the path diagram. The measurement of a latent variable or a factor is accomplished through one or more observable indicators, such as responses to questionnaire items that are assumed to represent the latent variable. In our model, two observed variables (x1 and x2) are used as indicators of the latent variable ξ1, three indicators (x3-x5) for latent variable ξ2, and three (y1-y3) for latent variable η1. Note that η2 has a single indicator, indicating that the latent variable is directly measured by a single observed variable.

Figure 1.1 A hypothesized general structural equation model. [Path diagram not reproduced.]

The latent variables or factors that are determined by variables within the model are called endogenous latent variables, denoted by η; the latent variables whose causes lie outside the model are called exogenous latent variables, denoted by ξ. In the example model, there are two exogenous latent variables (ξ1 and ξ2) and two endogenous latent variables (η1 and η2). Indicators of the exogenous latent variables are called exogenous indicators (e.g. x1-x5), and indicators of the endogenous latent variables are endogenous indicators (e.g. y1-y4). The former have measurement errors symbolized as δ, and the latter have measurement errors symbolized as ε (see Figure 1.1). The coefficients β and γ in the path diagram are path coefficients. The first subscript of a path coefficient indexes the dependent endogenous variable, and the second subscript indexes the causal variable (either endogenous or exogenous). If the causal variable is exogenous (ξ), the path coefficient is a γ; if the causal variable is another endogenous variable (η), the path coefficient is a β. For example, β12 is the effect of endogenous variable η2 on the endogenous variable η1; γ12 is the effect of the second exogenous variable ξ2 on the first endogenous variable η1. As in multiple regression, nothing is predicted perfectly; there are always residuals or errors. The ζs in the model, pointing toward the endogenous variables, are structural equation residual terms.

Different from traditional statistical methods, such as multiple regression, ANOVA, and path analysis, SEM focuses on latent variables/factors rather than on the observed variables. The basic objective of SEM is to provide a means of estimating the structural relations among the unobserved latent variables of a hypothesized model free of the effects of measurement errors. This objective is fulfilled through integrating a measurement model (confirmatory factor analysis - CFA) and a structural model (structural equations or latent variable model) into the framework of a structural equation model. It can be claimed that a general structural equation model consists of two parts: (i) the measurement model that links observed variables to unobserved latent variables (factors); and (ii) structural equations that link the latent variables to each other via a system of simultaneous equations (Jöreskog 1973).

1.2.1 Measurement models

The measurement model is the measurement component of a SEM. An essential purpose of the measurement model is to describe how well the observed indicator variables serve as a measurement instrument for the underlying latent variables or factors. Measurement models are usually carried out and evaluated by CFA. A measurement model or CFA proposes links or relations between the observed indicator variables and the underlying latent variables/factors that they are designed to measure, and then tests them against the data to "confirm" the proposed factorial structure. In the structural equation model specified in Figure 1.1, three measurement models can be considered (see Figure 1.2a-c). In each measurement model, the λ coefficients, which are called factor loadings in the terminology of factor analysis, are the links between the observed variables and latent variables. For example, in Figure 1.2a, the observed variables x1-x5 are linked through λx11-λx52 to latent variables ξ1 and ξ2, respectively. In Figure 1.2b, the observed variables y1-y3 are linked through λy11-λy31 to latent variable η1. Note that Figure 1.2c can be considered a special CFA model with a single factor η2 and a single indicator y4. Of course, this model cannot be estimated separately because it is unidentified. We will discuss this issue later.

Figure 1.2 (a) Measurement model 1; (b) measurement model 2; (c) measurement model 3. [Path diagrams not reproduced.]

Factor loadings in CFA models are usually denoted by the Greek letter λ. The first subscript of a factor loading indexes the indicator, and the second subscript indexes the corresponding latent variable. For example, λx21 represents the factor loading linking indicator x2 to exogenous latent variable ξ1, and λy31 represents the factor loading linking indicator y3 to endogenous latent variable η1.

In the measurement model shown in Figure 1.2a, there are two latent variables/factors, ξ1 and ξ2, each of which is measured by a set of observed indicators. Observed variables x1 and x2 are indicators of the latent variable ξ1, and x3-x5 are indicators of ξ2. The two latent variables, ξ1 and ξ2, in this measurement model are correlated with each other (φ12 in Figure 1.2a stands for the covariance between ξ1 and ξ2), but no directional or causal relationship is assumed between them. If these two latent variables were not correlated with each other (i.e. φ12 = 0), there would be a separate measurement model for ξ1 and ξ2, respectively, where the measurement model for ξ1 would have only two observed indicators; thus it would not be identified. For a one-factor CFA model, a minimum of three indicators is required for model identification. If no errors are correlated, a one-factor CFA model with three indicators (e.g. the measurement model shown in Figure 1.2b) is just identified (i.e.
when the variance/covariance structure is analyzed, the number of observed variances/covariances equals the number of free parameters).¹ In such a case, model fit cannot be assessed, although model parameters can be estimated. In order to assess model fit, the model must be over-identified (i.e. the observed pieces of information outnumber the model parameters that need to be estimated). Without specifying error correlations, a one-factor CFA model needs four or more indicators in order to be over-identified. However, a factor with only two indicators may be acceptable if the factor is specified to be correlated with at least one other factor in a CFA and no error terms are correlated with each other (Bollen 1989; Brown 2015). The measurement model shown in Figure 1.2a is over-identified, although factor ξ1 has only two indicators. Nonetheless, multiple indicators should be considered to represent the underlying construct more completely, since different indicators can reflect non-overlapping aspects of the underlying construct.

Figure 1.2c shows a simple measurement model. For some single observed indicator variables that are unlikely to have measurement errors (e.g. gender, ethnicity), the simple measurement model reduces to y4 = η2, where the factor loading λy42 is set to 1.0 and the measurement error ε4 is 0.0. That is, the observed variable y4 is a "perfect" measure of the construct η2. If the single indicator is not a perfect measure, its measurement error cannot be estimated within the model; rather, one must specify a fixed measurement error variance based on a known reliability estimate of the indicator (Hayduk 1987; Wang et al. 1995). This issue will be discussed in Chapter 3.

¹ For a one-factor CFA model with three indicators, there are 3(3 + 1)/2 = 6 observed variances/covariances and six free parameters: two factor loadings (one loading is fixed to 1.0), one variance of the factor, and three variances of the error terms; thus degrees of freedom (df) = 0.

1.2.2 Structural models

Once latent variables/factors have been assessed in the measurement models, the potential relationships among the latent variables are hypothesized and assessed in another part of SEM: the structural model (structural equations or latent variable model) (see Figure 1.3). The path coefficients γ11, γ12, γ21, and γ22 specify the effects of the exogenous latent variables ξ1 and ξ2 on the endogenous latent variables η1 and η2, while β12 specifies the effect of η2 on η1; that is, the structural model defines the relationships among the latent variables, and it is estimated simultaneously with the measurement models.

Figure 1.3 Structural model. [Path diagram not reproduced.]

Note that if the variables in a structural model were all observed variables, rather than latent variables, the structural model would become a system of structural relationships among observed variables; the model would thus reduce to traditional path analysis in sociology or a simultaneous equations model in econometrics. The model shown in Figure 1.3 is a recursive model. If the model allows for reciprocal or feedback effects (e.g. η1 and η2 influence each other), the model is called a nonrecursive model. Only applications of recursive models will be discussed in this book. Readers who are interested in nonrecursive models are referred to Berry (1984) and Bollen (1989).
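The identification arithmetic used above (e.g. the footnote's count of 3(3 + 1)/2 = 6 observed variances/covariances against six free parameters) can be sketched as a small helper. This is our illustration, not part of the book or of Mplus, and the function names are hypothetical.

```python
def observed_moments(p: int) -> int:
    """Number of distinct variances/covariances among p observed variables: p(p+1)/2."""
    return p * (p + 1) // 2

def degrees_of_freedom(p: int, free_params: int) -> int:
    """df = observed moments minus free parameters; 0 = just identified, > 0 = over-identified."""
    return observed_moments(p) - free_params

# One-factor CFA with three indicators and no correlated errors:
# free parameters = 2 loadings (one fixed to 1.0) + 1 factor variance + 3 error variances = 6
print(observed_moments(3))        # 6
print(degrees_of_freedom(3, 6))   # 0  -> just identified; parameters estimable, fit not testable

# Four indicators: 10 observed moments vs. 3 loadings + 1 factor variance + 4 error variances = 8
print(degrees_of_freedom(4, 8))   # 2  -> over-identified; model fit can be assessed
```

This counting is a necessary (though not sufficient) condition for identification: a model with negative df cannot be estimated at all, while positive df is what makes fit testing possible.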

1.2.3

Model formulation in equations

Traditionally, SEM is estimated based on analysis of covariance structure (COVS), in which variables are measured as deviations from the means of the original observed measures. In other words, the intercepts of indicator variables and the means of latent variables are all set to 0. This data transformation helps simplify model specification and calculation, and does not affect parameter estimation. However, when factor means and indicator intercepts (or thresholds of categorical outcomes) are of concern in modeling, such as in multi-group SEM (which will be discussed in Chapter 5), the mean and covariance structures (MACS) should be analyzed (also called analysis of moment structures). For the purpose of simplicity, here we discuss model formulation based on COVS. The general structural equation model can be expressed by three basic equations:

η = Bη + Γξ + ζ
Y = Λy η + ε
X = Λx ξ + δ    (1.1)

These three equations are expressed in matrix format.2 Definitions of the variable matrices involved in the three equations are shown in Table 1.1. The first equation in Eq. (1.1) represents the structural model that establishes the relationships, or structural equations, among latent variables. The components of η are endogenous latent variables, and the components of ξ are exogenous latent variables. The endogenous and exogenous latent variables are connected by a system of linear equations with coefficient matrices B (beta) and Γ (gamma), as well as a residual vector ζ (zeta), where Γ represents the effects of exogenous latent variables on endogenous latent variables, B represents the effects of some endogenous latent variables on other endogenous latent variables, and ζ represents the regression residual terms.

2. For simplicity, deviations from the means of the original observed variables are usually used in structural equation model specification.

Table 1.1 Definitions of the variable matrices in the three basic equations of a general structural equation model.

Variable        Definition                      Dimension
η (eta)         Latent endogenous variable      m × 1
ξ (xi)          Latent exogenous variable       n × 1
ζ (zeta)        Residual term in equations      m × 1
y               Endogenous indicators           p × 1
x               Exogenous indicators            q × 1
ε (epsilon)     Measurement errors of y         p × 1
δ (delta)       Measurement errors of x         q × 1

Note: m and n represent the number of endogenous and exogenous latent variables, respectively; p and q are the number of endogenous and exogenous indicators, respectively, in the sample.

The second and third equations in Eq. (1.1) represent measurement models that define the latent variables from the observed variables. The second equation links the endogenous indicators – the observed y variables – to the endogenous latent variables (i.e. η's), while the third equation links the exogenous indicators – the observed x variables – to the exogenous latent variables (i.e. ξ's). The observed variables (y's and x's) are related to the corresponding latent variables η and ξ by the factor loadings Λy (lambda y) and Λx. The ε and δ are the measurement errors associated with the observed variables y and x, respectively. It is assumed that E(ε) = 0, E(δ) = 0, Cov(ε, ξ) = 0, Cov(ε, η) = 0, Cov(δ, η) = 0, Cov(δ, ξ) = 0, and Cov(ε, δ) = 0, but Cov(εi, εj) and Cov(ηi, ηj) (i ≠ j) might not be 0. In the three basic equations shown in Eq. (1.1), there are a total of eight fundamental matrices in LISREL notation:3 Λx, Λy, Γ, B, Φ, Ψ, Θε, and Θδ (Jöreskog and Sörbom 1981). A structural equation model is fully defined by the specification of the structure of these eight matrices. In the early days of SEM, a structural equation model was specified in matrix format using these eight parameter matrices. Although this is no longer the case in existing SEM programs/software, information about parameter estimates in the parameter matrices is reported in the output of Mplus and other SEM computer programs. Understanding these notations helps researchers check the estimates of specific parameters in the output. A summary of these matrices and vectors is presented in Table 1.2. The first two matrices, Λy and Λx, are factor-loading matrices that link the observed indicators to the latent variables η and ξ, respectively. The next two matrices, B (beta) and Γ (gamma), are structural coefficient matrices. The B matrix is an m × m coefficient matrix representing the relationships among latent endogenous variables.
The model assumes that (I − B) is nonsingular; thus, (I − B)−1 exists so that model estimation can be done. A 0 in the B matrix indicates the absence of an effect of one

3. LISREL, which stands for linear structural relationship, is the first computer software for SEM, written by Drs. Karl Jöreskog and Dag Sörbom of the University of Uppsala, Sweden. SEM was called LISREL modeling for many years.

Table 1.2 Eight fundamental parameter matrices for a general structural equation model.

Matrix                 Definition                               Dimension
Coefficient matrices
Λy (lambda y)          Factor loadings relating y to η          p × m
Λx (lambda x)          Factor loadings relating x to ξ          q × n
B (beta)               Coefficient matrix relating η to η       m × m
Γ (gamma)              Coefficient matrix relating ξ to η       m × n
Variance/covariance matrices
Φ (phi)                Variance/covariance matrix of ξ          n × n
Ψ (psi)                Variance/covariance matrix of ζ          m × m
Θε (theta-epsilon)     Variance/covariance matrix of ε          p × p
Θδ (theta-delta)       Variance/covariance matrix of δ          q × q

Note: p is the number of y variables, q is the number of x variables, n is the number of ξ variables, and m is the number of η variables.
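For readers who like to keep the bookkeeping straight, the eight matrices and their dimensions can be laid out programmatically. The sketch below uses Python/NumPy rather than Mplus, with dimensions matching the chapter's example model (m = 2 endogenous and n = 2 exogenous latent variables, p = 4 y-indicators, q = 5 x-indicators); the all-zero entries are placeholders, not estimates:

```python
import numpy as np

# Dimensions taken from the chapter's example model (for illustration only):
# m = 2 endogenous factors, n = 2 exogenous factors,
# p = 4 y-indicators, q = 5 x-indicators.
m, n, p, q = 2, 2, 4, 5

# The eight fundamental parameter matrices of Table 1.2, as zero placeholders.
matrices = {
    "Lambda_y":    np.zeros((p, m)),  # loadings of y on eta
    "Lambda_x":    np.zeros((q, n)),  # loadings of x on xi
    "B":           np.zeros((m, m)),  # effects among the etas
    "Gamma":       np.zeros((m, n)),  # effects of xi on eta
    "Phi":         np.zeros((n, n)),  # Var/Cov of xi
    "Psi":         np.zeros((m, m)),  # Var/Cov of zeta
    "Theta_eps":   np.zeros((p, p)),  # Var/Cov of epsilon
    "Theta_delta": np.zeros((q, q)),  # Var/Cov of delta
}

for name, mat in matrices.items():
    print(name, mat.shape)
```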

latent endogenous variable on another. For example, β12 = 0 indicates that the latent variable η2 does not have an effect on η1. Note that the main diagonal of matrix B is always 0; that is, a latent variable η cannot be a predictor of itself. The Γ matrix is an m × n coefficient matrix that relates latent exogenous variables to latent endogenous variables. There are four parameter variance/covariance matrices for a general structural equation model: Φ (phi), Ψ (psi), Θε (theta-epsilon), and Θδ (theta-delta).4 All four variance/covariance matrices are symmetric square matrices; i.e. the number of rows equals the number of columns in each of the matrices. The elements in the main diagonal of each of the matrices are the variances, which should always be positive; the off-diagonal elements are covariances of all pairs of variables in the matrices. The n × n matrix Φ is the variance/covariance matrix for the latent exogenous ξ variables. Its off-diagonal element φij (i.e. the element in the ith row and jth column of matrix Φ) is the covariance between the latent exogenous variables ξi and ξj (i ≠ j). With the standardized solution, the matrix becomes a correlation matrix in which the diagonal values are 1, and the off-diagonal values are correlations between exogenous latent variables. If ξi and ξj are not hypothesized to be correlated with each other in the model, φij = 0 should be set when specifying the model. The m × m matrix Ψ is the variance/covariance matrix of the residual terms ζ of the structural equations. In the simultaneous equations of econometrics, the disturbance terms in different equations are often assumed to be correlated with each other. Such correlations can be readily set up in matrix Ψ and estimated in SEM. The last two matrices (i.e. the p × p Θε and the q × q Θδ) are variance/covariance matrices of the measurement errors for the observed y and x variables, respectively.
In longitudinal studies, autocorrelations can be easily handled by correlating specific error terms with each other.

4. The variance/covariance matrix of the latent endogenous variables η need not be estimated in modeling since it can be calculated as: Var(η) = Var[(I − B)−1(Γξ + ζ)].
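The footnote's identity can be evaluated directly: since η = (I − B)−1(Γξ + ζ) and ξ and ζ are uncorrelated by assumption, Var(η) = (I − B)−1(ΓΦΓ′ + Ψ)[(I − B)−1]′. A NumPy sketch with made-up parameter values (illustration only, not estimates from any data set):

```python
import numpy as np

# Hypothetical parameter values for a two-factor structural model.
B     = np.array([[0.0, 0.4],
                  [0.0, 0.0]])       # only beta_12 is nonzero
Gamma = np.array([[0.3, 0.2],
                  [0.5, 0.1]])
Phi   = np.array([[1.0, 0.3],
                  [0.3, 1.0]])       # Var/Cov of xi
Psi   = np.diag([0.5, 0.6])          # Var/Cov of zeta

I = np.eye(2)
A = np.linalg.inv(I - B)             # (I - B)^{-1}; requires I - B nonsingular

# Var(eta) = (I - B)^{-1} (Gamma Phi Gamma' + Psi) [(I - B)^{-1}]'
var_eta = A @ (Gamma @ Phi @ Gamma.T + Psi) @ A.T
print(var_eta)
```

As expected of a covariance matrix, the result is symmetric with positive variances on the diagonal.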


The SEM specification actually formulates a set of model parameters contained in the eight matrices. Those parameters can be specified as either fixed or free. Fixed parameters are not estimated from the model, and their values are typically fixed at 0 (i.e. zero covariance or zero slope, indicating no relationship or no effect) or 1.0 (i.e. fixing one of the factor loadings to 1.0 for the purpose of model identification). Free parameters are estimated from the model. The hypothesized model shown in Figure 1.1 can be specified in matrix notation based on the three basic equations. First, the equation η = Bη + Γξ + ζ can be expressed as:

[η1]   [0   β12] [η1]   [γ11  γ12] [ξ1]   [ζ1]
[η2] = [0   0  ] [η2] + [γ21  γ22] [ξ2] + [ζ2]    (1.2)

where the free parameters are represented by symbols (i.e. Greek letters). The fixed parameters (i.e. whose values are fixed) represent restrictions on the parameters, according to the model. For example, β21 is fixed to 0, indicating that η2 is not specified to be influenced by η1 in the hypothetical model. The diagonal elements in matrix B are all fixed to 0, as a variable is not supposed to influence itself. The elements in matrix B are the structural coefficients that express an endogenous latent variable η as a linear function of other endogenous latent variables; the elements in matrix Γ are the structural coefficients that express an endogenous latent variable η as a linear function of exogenous latent variables. From Eq. (1.2), we have the following two structural equations:

η1 = β12η2 + γ11ξ1 + γ12ξ2 + ζ1
η2 = γ21ξ1 + γ22ξ2 + ζ2    (1.3)
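Eqs. (1.2) and (1.3) are the same statement in two notations, which can be verified numerically. The values below are arbitrary illustrations (not estimates); the check simply evaluates both right-hand sides:

```python
import numpy as np

# Arbitrary illustrative values for the latent variables and residuals.
eta  = np.array([0.7, -0.2])         # (eta1, eta2)
xi   = np.array([1.1, 0.4])          # (xi1, xi2)
zeta = np.array([0.05, -0.03])       # (zeta1, zeta2)

B     = np.array([[0.0, 0.4],        # beta_12 = 0.4; beta_21 and diagonal fixed to 0
                  [0.0, 0.0]])
Gamma = np.array([[0.3, 0.2],
                  [0.5, 0.1]])

# Right-hand side of the matrix form, Eq. (1.2):
rhs_matrix = B @ eta + Gamma @ xi + zeta

# Right-hand sides of the scalar form, Eq. (1.3):
eta1_rhs = 0.4 * eta[1] + 0.3 * xi[0] + 0.2 * xi[1] + zeta[0]
eta2_rhs = 0.5 * xi[0] + 0.1 * xi[1] + zeta[1]

print(rhs_matrix, eta1_rhs, eta2_rhs)
```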

The measurement equation Y = Λy η + ε can be expressed as:

[y1]   [1     0   ]          [ε1]
[y2]   [λy21  0   ]  [η1]    [ε2]
[y3] = [λy31  0   ]  [η2]  + [ε3]
[y4]   [0     λy42]          [ε4]    (1.4)

where the Λy matrix decides which observed endogenous y indicators load on which endogenous latent variables η. The fixed value of 0 indicates that the corresponding indicator does not load on the corresponding latent variable, while the fixed value of 1 is used for the purpose of defining the scale of the latent variable. We will discuss this issue in detail in Chapter 2. From Eq. (1.4), we have the following four measurement equations:

y1 = η1 + ε1
y2 = λy21η1 + ε2
y3 = λy31η1 + ε3
y4 = λy42η2 + ε4    (1.5)


As the second endogenous latent variable η2 has only one indicator (i.e. y4), λy42 should be set to 1.0, and thus y4 = η2 + ε4. It is hard to estimate the measurement error in such an equation in SEM, so the equation is usually set to y4 = η2, assuming that the latent variable η2 is perfectly measured by the single indicator y4. However, if the reliability of y4 is known, based on previous studies or estimated from test-retest data, the variance of ε4 in the equation y4 = η2 + ε4 can be estimated and specified in the model to take into consideration the effect of measurement errors in y4. We will demonstrate how to do so in Chapter 3. The other measurement equation, X = Λx ξ + δ, can be expressed as:

[x1]   [1     0   ]          [δ1]
[x2]   [λx21  0   ]  [ξ1]    [δ2]
[x3] = [0     1   ]  [ξ2]  + [δ3]
[x4]   [0     λx42]          [δ4]
[x5]   [0     λx52]          [δ5]    (1.6)

Thus,

x1 = ξ1 + δ1
x2 = λx21ξ1 + δ2
x3 = ξ2 + δ3
x4 = λx42ξ2 + δ4
x5 = λx52ξ2 + δ5    (1.7)

Among the seven random variable vectors (δ, ε, ζ, x, y, ξ, and η), x, y, ξ, and η are usually used together with the eight parameter matrices to define a structural equation model; the others are error terms or model residuals. It is assumed that E(ζ) = 0, E(ε) = 0, E(δ) = 0, Cov(ζ, ξ) = 0, Cov(ε, η) = 0, and Cov(δ, ξ) = 0. In addition, multivariate normality is assumed for the observed x and y variables.

1.3 Model identification

A fundamental consideration in specifying a structural equation model is model identification. Essentially, model identification concerns whether a unique value for each and every unknown parameter can be estimated from the observed data. For a given free unknown parameter that needs to be model-estimated, if it is not possible to express the parameter algebraically as a function of sample variances/covariances, then that parameter is unidentified. We can get a sense of the problem by considering the example equation Var(y) = Var(η) + Var(ε), where Var(y) is the variance of the observed variable y, Var(η) is the variance of the latent variable η, and Var(ε) is the variance of the measurement error. There is one known (Var(y)) and two unknowns (Var(η) and Var(ε)) in the equation; therefore, there is no unique solution for either Var(η) or Var(ε). That is, there are an infinite number of combinations of values of Var(η) and Var(ε) that would sum to Var(y), rendering this single-equation model unidentified.
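The indeterminacy is easy to make concrete in code. Below, three different (Var(η), Var(ε)) splits reproduce the same observed Var(y) equally well, and fixing Var(ε) to a constant C pins Var(η) down uniquely (all numbers made up):

```python
# Toy illustration of under-identification: Var(y) = Var(eta) + Var(eps).
var_y = 10.0                                        # the single observed "data point"

# Several splits of var_y fit the observed variance equally well:
candidates = [(4.0, 6.0), (7.0, 3.0), (9.5, 0.5)]   # (Var(eta), Var(eps)) pairs
for v_eta, v_eps in candidates:
    assert v_eta + v_eps == var_y                   # each pair reproduces Var(y) exactly

# Imposing the constraint Var(eps) = C identifies Var(eta) uniquely:
C = 3.0
var_eta = var_y - C
print(var_eta)   # 7.0
```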


To solve the problem, we need to impose some constraints on the equation. One such constraint might be to fix the value of Var(ε) to a constant by adding one more equation, Var(ε) = C (where C is a constant). Then Var(η) would be ensured a unique estimate: Var(η) = Var(y) − C. In other words, the parameter Var(η) in the equation is identified. The same general principles hold for more complicated structural equation models. If an unknown parameter can be expressed by at least one algebraic function of one or more elements of S (the variance/covariance matrix of the observed variables), that parameter is identified. If all the unknown parameters are identified, then the model is identified. Very often, parameters can be expressed by more than one distinct function; in this case, the parameter is over-identified. Over-identification means there is more than one way of estimating a parameter (or parameters) because there is more than enough information for estimating it. However, parameter estimates obtained from different functions should have an identical value in the population when the model is correct (Bollen 1989). A model is over-identified when each parameter is identified and at least one parameter is over-identified. A model is just-identified when each parameter is identified and none is over-identified. The term identified models refers to both just-identified and over-identified models. A not-identified (under-identified or unidentified) model has one or more unidentified parameters. Specifically, if an independent latent variable is involved in the identification problem, its variance and the coefficients associated with all paths emitted by it will not be identified; if a dependent latent variable is involved, its residual variance and the coefficients associated with all paths leading to or from it will not be identified.

If a model is under-identified, consistent estimates of all the parameters will not be attainable. Identification is not an issue of sample size: an under-identified model remains under-identified no matter how big the sample size is. For any model to be estimated, it must be either just-identified or over-identified. Over-identified SEM is of primary interest in SEM applications. It refers to a situation where there are fewer parameters in the model than data points (i.e. the number of distinct variances and covariances among the observed variables).5 However, models containing over-identified parameters do not necessarily fit the data, thus creating the possibility of testing whether a model fits the observed data. The difference between the number of observed variances and covariances and the number of free parameters is called the degrees of freedom (df) associated with the model fit. By contrast, a just-identified model has zero df; therefore, its model fit cannot be tested. There is no simple set of necessary and sufficient conditions for verifying parameter identification in structural equation models. However, there are two necessary conditions that should always be checked.

5. Data points usually refer to the number of distinct variances and covariances among the observed variables; however, when the MACS is analyzed, the means of the observed variables are also counted in the data points.
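The counting rule behind the df definition is simple enough to script. A sketch (Python; the free-parameter count passed in below is a hypothetical number, not a tally for any particular model in this chapter):

```python
def data_points(p, q):
    """Distinct variances/covariances among p + q observed variables under COVS:
    (p + q)(p + q + 1) / 2."""
    k = p + q
    return k * (k + 1) // 2

def model_df(p, q, n_free):
    """Degrees of freedom = data points minus free parameters.
    Negative df signals an under-identified model."""
    return data_points(p, q) - n_free

# With p = 4 endogenous and q = 5 exogenous indicators (as in the chapter's
# example model), there are 9 * 10 / 2 = 45 data points.
print(data_points(4, 5))        # 45
print(model_df(4, 5, 30))       # hypothetical 30 free parameters -> df = 15
```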


First, the number of data points must not be less than the number of free parameters. For analysis of covariance structure (COVS), the number of data points is the number of distinct elements in the observed variance/covariance matrix S; it is equal to (p + q)(p + q + 1)/2, where (p + q) is the total number of observed variables (p is the number of endogenous indicators, and q is the number of exogenous indicators). That is, only the diagonal variances and one set of off-diagonal covariances in matrix S, either above or below the diagonal, are counted. The number of free parameters is the number of path coefficients, variances of latent variables, variances of error terms, covariances between latent variables, and error-term covariances that are to be estimated in the model. If there are more data points than free parameters, the model is said to be over-identified. If there are fewer data points than free parameters, the model is said to be under-identified, and parameters cannot be estimated. Second, a latent variable/factor is a hypothetical measure and has no intrinsic scale; thus a measurement scale must be established for every latent variable/factor in the model. To establish the measurement scale of a latent variable, two approaches are usually applied: (i) fix one of the factor loadings (λ's) to 1;6 or (ii) fix the variance of the latent variable to 1. If the variance of the latent variable (i.e. Φ) is free, and all the factor loadings (λ's) are free, then the factor loadings and the variance of the latent variable are not identified. When MACS is analyzed, another approach to factor scaling is the effect coding method, which constrains the factor loadings to average 1.0 and the item intercepts to sum to 0 (Little et al. 2006; Te Grotenhuis et al. 2017). All three approaches to factor scaling will be demonstrated in the next chapter. The two conditions for model identification are necessary but not sufficient.

Identification problems can still arise even if these two conditions are satisfied. Although a rigorous verification of model identification can be achieved algebraically, existing SEM software/programs generally provide a check for identification during model estimation. When a model is not identified, error messages will be printed in the program output, pointing to the parameters that are involved in the identification problem. Using this information, one can modify the model in a meaningful way to eliminate the problem. The best way to solve the identification problem is to avoid it. Usually, one can add more indicators of latent variables so that there are more data points. However, the primary prevention strategy is to emphasize parameter specification. Model identification actually depends on the specification of parameters as free, fixed, or constrained. A free parameter is a parameter that is unknown and needs to be model-estimated. A fixed parameter is a parameter that is fixed to a specified value. For example, in a measurement model, either one indicator of each latent variable must have a factor loading fixed to 1.0, or the variance of each latent variable should be fixed to 1.0. In addition, some path coefficients or correlations/covariances are fixed to 0 because no such effects or relationships exist in the hypothesized model. A parameter can also be constrained to equal one or more other parameters. Supposing that previous research shows variables x1 and x2 have the same effect on a dependent

6. Most of the existing SEM software/programs set the factor loading of the first observed indicator of a latent variable to 1.0 by default.


measure, an equality restriction could be imposed on their path coefficients in the structural equation model. By fixing or constraining some of the parameters, the number of free parameters can be reduced; as such, an under-identified model may become identified. In addition, reciprocal or non-recursive SEM is another common source of identification problems. A structural model is non-recursive when a reciprocal or bidirectional relationship is specified so that a feedback loop exists between two variables in the model (e.g. y1 affects y2 on one hand, and y2 affects y1 on the other). Such models are generally under-identified. For the y1 (y2) equation to be identified, one or more instrumental variables need to directly affect y1 (y2) but not y2 (y1) (Berry 1984).

1.4 Model estimation

Estimation of structural equation models is different from that of multiple regression. Instead of minimizing the discrepancies between the fitted and observed values of the response variable (i.e. Σ(y − ŷ)2), SEM estimation procedures minimize the residuals, which are differences between the sample variances/covariances and the variances/covariances estimated from the model. Let's use Σ to denote the population covariance matrix of the observed variables y and x. Then Σ can be expressed as a function of the free parameters θ in a hypothesized model (see Appendix 1.A at the end of this chapter). The basic hypothesis in SEM is:

Σ = Σ(θ)    (1.8)

where Σ(θ) is called the implied variance/covariance matrix, which is the variance/covariance matrix implied by the population parameters for the hypothesized model. The purpose of model estimation or model fitting is to find a set of model parameters θ to produce Σ(θ) so that (Σ − Σ(θ)) is minimized. The discrepancy between Σ and Σ(θ) indicates how well the model fits the data. Because Σ and Σ(θ) are unknown, in SEM it is actually (S − Σ(θ̂)), or (S − Σ̂), that is minimized, where S is the sample variance/covariance matrix, θ̂ are the model-estimated parameters, and Σ̂ is the model-estimated variance/covariance matrix. As mentioned earlier, a given theoretical structural equation model is represented by specifying a pattern of fixed and free (estimated) elements in each of the eight parameter matrices. The matrix of observed covariances (S) is used to estimate values for the free parameters in the matrices that best reproduce the data. Given any set of specific numerical values of the eight model parameter matrices (see Table 1.2), one and only one Σ̂ will be reproduced. If the model is correct, Σ̂ will be very close to S. This estimation process involves the use of a particular fitting function to minimize the difference between S and Σ̂. There are many fitting functions or estimation procedures available for model estimation. The most commonly employed fitting function for SEM is the ML function (see Appendix Eq. (1.5)):

FML(θ̂) = ln|Σ̂| − ln|S| + tr(SΣ̂−1) − (p + q)    (1.9)
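Eq. (1.9) translates directly into code. The sketch below (Python/NumPy, with a made-up sample covariance matrix) confirms that when Σ̂ = S the function returns 0:

```python
import numpy as np

def f_ml(S, Sigma_hat):
    """ML fitting function of Eq. (1.9):
    ln|Sigma_hat| - ln|S| + tr(S Sigma_hat^{-1}) - (p + q)."""
    k = S.shape[0]                                # p + q observed variables
    _, logdet_hat = np.linalg.slogdet(Sigma_hat)  # stable log-determinant
    _, logdet_s = np.linalg.slogdet(S)
    return logdet_hat - logdet_s + np.trace(S @ np.linalg.inv(Sigma_hat)) - k

# A made-up positive definite "sample" covariance matrix S.
S = np.array([[2.0, 0.5, 0.3],
              [0.5, 1.5, 0.4],
              [0.3, 0.4, 1.0]])

print(f_ml(S, S))           # perfect fit: 0 (up to floating-point rounding)
print(f_ml(S, np.eye(3)))   # a misfitting Sigma_hat yields a positive value
```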


where S and Σ̂ are the sample and model-estimated variance/covariance matrices, respectively, and (p + q) is the number of observed variables involved in the model (yielding (p + q)(p + q + 1)/2 unique variances and covariances). The goal in SEM estimation is to estimate model parameters such that a function of the discrepancy between S and Σ̂ is minimized. FML is a measure of this discrepancy (a discrepancy function). When a model fits the data perfectly, the model-estimated variance/covariance matrix equals the sample variance/covariance matrix: i.e. Σ̂ = S, and then ln|Σ̂| = ln|S| and tr(Σ̂−1S) = tr(I) = (p + q); therefore FML(θ̂) = 0. That is, a perfect model fit is indicated by a 0 value of the fitting function. ML has several important properties. (i) ML estimates are unbiased – on average, in large samples, they neither over-estimate nor under-estimate the corresponding population parameters. (ii) ML estimates are consistent – they converge in probability to the true values of the population parameters being estimated as sample size increases. (iii) ML estimates are efficient – they have minimum variance when the sample size is large. (iv) The distribution of the parameter estimates approximates a normal distribution as sample size increases (i.e. ML estimates are asymptotically normally distributed). (v) ML functions are usually scale free – a change in variable scale does not yield different solutions. (vi) The ML fitting function FML(θ̂) multiplied by (n − 1) approximates a χ2 distribution under the assumption of multivariate normality and large sample size, and the model χ2 can be used for testing overall model fit. Importantly, Mplus ML estimators (e.g. ML, MLR) are implemented in conjunction with full information maximum likelihood (FIML) (sometimes called direct maximum likelihood or raw maximum likelihood).

FIML is a modern method of handling missing data that is superior to traditional approaches, such as LISTWISE deletion, PAIRWISE deletion, and similar response-pattern imputation (Enders and Bandalos 2001). FIML uses every piece of information in the outcome measures for model estimation and allows missing data, assuming the missingness is ignorable, such as missing completely at random (MCAR) or missing at random (MAR). MAR is a plausible assumption that allows missingness to depend on both the observed outcomes and covariates (Arbuckle 1996; Little and Rubin 2002). ML estimation in conjunction with FIML provides unbiased estimates even when the missing data are MAR (Arbuckle 1996; Little and Rubin 2002). ML is carried out for continuous outcome measures under a normality assumption. Under conditions of severe non-normality, ML parameter estimates are less likely to be biased, but the standard errors of parameter estimates may be biased, and the model χ2 statistic may be enlarged, leading to an inflated Type I error for model rejection. When non-normality threatens the validity of ML significance tests, several remedies are possible. (i) Researchers may consider transformations of non-normal variables that lead them to better approximate multivariate normality. (ii) Researchers may remove outliers from the data. (iii) Bootstrap procedures may be applied to estimate variances of parameter estimates for significance tests (Efron and Tibshirani 1993; Shipley 2000; Bollen and Stine 1993). (iv) Or, preferably, robust estimators (e.g. MLR, Bayes estimators) that allow for non-normality can be applied.


A well-known asymptotically distribution free (ADF) estimator developed by Browne (1982, 1984) does not assume multivariate normality of the observed variables. ADF is a weighted least squares estimator. The weight matrix used in ADF is a consistent estimate of the asymptotic variance/covariance matrix of the sample variance/covariance matrix S. The disadvantage of ADF is that it is computationally demanding and requires a large sample size (Jöreskog and Sörbom 1989; Muthén and Kaplan 1992; Bentler and Yuan 1999). Another approach, proposed by Satorra and Bentler (1988), is to adjust the ML estimator to account for non-normality. This method provides a rescaled χ2 statistic, called the Satorra-Bentler χ2 or SB χ2, that is robust under non-normality (Boomsma and Hoogland 2001; Hoogland 1999). Bentler and Yuan (1999) proposed an adjusted ADF χ2 and found it performed well with small sample sizes. Mplus provides several ML estimators that are robust to data non-normality, such as MLM (ML parameter estimates with standard errors and a mean-adjusted χ2 test statistic, referred to as the Satorra-Bentler χ2), MLMV (ML parameter estimates with standard errors and a mean- and variance-adjusted χ2 statistic), and MLR (ML sandwich estimator with robust standard errors and a χ2 statistic that is asymptotically equivalent to the Yuan-Bentler T2* test statistic) (Muthén and Muthén 1998–2017). MLM and MLMV are available only when LISTWISE = ON is specified in the DATA command because they cannot handle missing values. Missingness allowed for MLR is less restrictive than MCAR, but more restrictive than MAR.7 MLR is a preferable ML estimator because it is robust not only to data non-normality but also to observation non-independence when used with TYPE = COMPLEX, and it enables one to handle missing data in an optimal way.

For ordinal categorical outcomes (e.g. binary, ordered categorical), the ADF estimator that was originally designed for non-normal continuous outcomes is often used for SEM estimation (Browne 1984). Mplus provides robust weighted least squares estimators, such as weighted least squares (WLS), WLSM (mean-adjusted WLS), and WLSMV (mean- and variance-adjusted WLS), for modeling categorical outcomes using a PROBIT link function (Muthén 1984; Muthén et al. 1997; Muthén and Muthén 1998–2017). WLSMV is the default estimator for modeling categorical outcomes in Mplus. The same weight matrix (asymptotic variance/covariance matrix) is used for WLS, WLSM, and WLSMV, but in different ways (Muthén and Satorra 1995). WLS uses the entire weight matrix, while WLSM and WLSMV use the entire weight matrix only for standard errors and tests of fit; for parameter estimation, WLSM and WLSMV use the diagonal of the weight matrix. While WLS provides slightly different model results, WLSM and WLSMV provide identical parameter estimates and standard errors, but slightly different adjusted model χ2 statistics. Unlike ML estimators, WLS estimators do not use FIML, but pairwise deletion, to handle missing data, and do not assume MAR. The WLS estimators, however, are consistent under the MARX (missing at random with respect to X) assumption; that is, only observed

7. According to Muthén (2006, http://www.statmodel.com/discussion/messages/22/1047.html): "The exact condition for which MLR is correct are these: if [Y|X] is non-normal and the missing patterns are either MCAR or MAR but only predicted by X and not Y then MLR gives valid results. This condition in between MCAR and MAR."


covariate variables are allowed to be related to data missingness (Asparouhov and Muthén 2010a). In other words, the missingness allowed for WLS estimators is less restrictive than MCAR, but more restrictive than MAR. ML estimators can also be used for estimating SEM with categorical outcomes. While the default link function is LOGIT, PROBIT is also available for ML estimators by using the LINK = PROBIT statement in the ANALYSIS command of the Mplus program. ML estimators handle missing data better than WLS estimators because they take advantage of FIML and can assume MAR. In addition, ML estimators (e.g. MLM, MLMV, MLR) are robust to data non-normality, while WLS estimators are not; thus, when both categorical and continuous variables are involved in a model and the distributions of the continuous variables are not normal, ML estimators may be used for model estimation. Note that when using ML estimators for modeling categorical outcomes, Mplus does not provide model χ2 statistics and related model fit indices. In addition, because it requires a numerical integration algorithm, FIML is very computationally demanding for modeling categorical outcomes. When more latent variables are involved in a model and/or we want to estimate error covariances between observed categorical outcomes, WLS estimators are much less computationally demanding.

1.4.1 Bayes estimator

The estimators described previously are traditional frequentist methods, in which the unknown true values of the model parameters are considered fixed while the data are random (uncertain). The Bayes estimator is an alternative estimator that has gained popularity in SEM in recent years. With the Bayes estimator, the unknown true model parameters are considered random (uncertain), while the data are fixed. Bayesian estimation combines prior distributions of the model parameters with the observed data to form posterior distributions for parameter estimates, updating the prior knowledge about the parameters with new observed data. A Bayes estimator became available in Mplus 6, and it has been one of the major Mplus estimators since then. We will demonstrate applications of Bayesian confirmatory factor analysis (BCFA) and the Bayesian structural equation model (BSEM) in Chapters 2 and 3, respectively. The Bayes estimator has many advantages over the traditional frequentist approaches (Lee and Song 2004; Yuan and MacKinnon 2009; Asparouhov and Muthén 2010c; Muthén 2010; Muthén and Asparouhov 2012a; Kaplan and Depaoli 2012), including, but not limited to, the following: (i) Bayesian estimation combines prior information with new data so that researchers' beliefs or previous findings can be used to inform the current model. (ii) Bayesian estimation has superior performance in small samples without reliance on asymptotic assumptions. (iii) While frequentist estimation treats parameters as constants, Bayesian estimation treats them as variables, and the posterior distribution of the parameters is constructed by iteratively making random draws from Markov chain Monte Carlo (MCMC) chains. As such, uncertainty in parameter estimates is taken into account, yielding more realistic predictions (Asparouhov and Muthén 2010c). (iv)
Bayesian estimation allows one to test SEMs with more complex structures (e.g. replacing exact-zero constraints on all cross-factor loadings and error covariances with approximate zeros in CFA) that traditional approaches fail to estimate. (v) Bayes is a full-information estimator in Mplus that uses all available data in an optimal way for modeling, assuming MAR. (vi) The simulated missing values in Bayesian estimation can be used for multiple imputation (MI) of missing values, and the imputed data sets can be simultaneously analyzed using Rubin's method (1987), so that the uncertainty in missing-value imputation can be handled (Schafer 1997; Asparouhov and Muthén 2010b). The factor scores imputed from the Bayesian approach are called plausible values of latent variables. Using the plausible values for further analysis can produce more accurate parameter estimates, compared with the traditional approaches (e.g. total score of construct items, estimated factor scores, or latent class membership) in frequentist analysis (Asparouhov and Muthén 2010d). (vii) Bayesian estimation can be readily used for modeling both continuous and categorical outcomes. Finally, (viii) improper solutions (e.g. "out of bounds" estimates, such as negative residual variances) can be avoided in Bayesian estimation by choosing a prior distribution that assigns zero probability to improper solutions. In addition, with a Bayes estimator, some new models become tractable, such as the dynamic structural equation model (DSEM), for which the ML approach is not practical (Asparouhov et al. 2018). However, choosing appropriate priors for Bayesian estimation requires skill and experience. Bayes' theorem is defined as follows:

p(𝜃 ∣ y) = p(y ∣ 𝜃)p(𝜃) / p(y)    (1.10)

where 𝜃 denotes the model parameters; p(𝜃 ∣ y) is the posterior distribution of 𝜃 given the observed data y; p(y ∣ 𝜃) is the likelihood function of the data given the model parameters; p(𝜃) is the prior distribution of the parameters; and p(y) is the marginal distribution of the observed data, which serves as a normalizing constant. Unlike traditional frequentist (ML) estimation, which assumes fixed population parameters, each parameter is random in Bayesian simulation. The uncertainty of a parameter is represented by its prior distribution p(𝜃), which encodes the knowledge about the parameter before observing the data. The product p(y ∣ 𝜃)p(𝜃) in Eq. (1.10) indicates that the prior information about the parameter is weighted, or updated, by the observed data (new information) to yield the posterior parameter distribution, which can be considered a compromise between the prior information and the new information. Analytically solving for posterior quantities of the model parameters 𝜃 from the multidimensional posterior distribution p(𝜃 ∣ y) is difficult because it involves integrating a complex, multidimensional probability distribution. The MCMC algorithm (Gelman et al. 2004) is most often used in Bayesian estimation to simplify this complex high-dimensional problem. MCMC is composed of two components: the Markov chain and Monte Carlo integration. The Markov chain part of MCMC iteratively
draws random values of parameters from the Bayesian posterior distribution over numerous iterations. Different samplers or sampling methods – e.g. Metropolis-Hastings (Chib and Greenberg 1995) and Gibbs sampling (Geman and Geman 1984) – can be used for Markov chain sampling; the default in Mplus is Gibbs sampling. A sequence of parameter values randomly drawn in multiple iterations is called a Markov chain, in which each value depends only on the previous one (this is known as the Markov memoryless property). With different starting values set by different seeds for the random draws, more than one chain can be generated for a parameter from different locations in the posterior distribution. In each chain, the initial iterations are discarded because the values generated in those iterations do not yet adequately represent the posterior distribution (Mplus discards the first half of the iterations of a chain). Throwing away the initial iterations is called burn-in, which is just like warming up the engine before driving a car. After the burn-in period, the simulation continues to randomly draw values from the posterior distribution. Once the Markov chain has reached a stationary distribution, the chain is considered converged. The parameter values simulated in the iterations after the burn-in period are used to generate the desired, or target, posterior distribution for parameter estimation. The Monte Carlo part of MCMC is used to obtain posterior point estimates (e.g. median, mean, mode), their standard deviations, and credibility intervals by numerically approximating the required integrals.
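The sampling scheme can be illustrated with a minimal sketch in Python. This is a generic Metropolis sampler, not Mplus's Gibbs implementation; the normal prior, the toy data `y`, and the function names are hypothetical, chosen only to show the multiple-chain and burn-in logic described above (first half of each chain discarded, as Mplus does).

```python
import math
import random

def metropolis_chain(log_post, start, n_iter=4000, scale=0.5, seed=1):
    """Draw one Markov chain from a posterior via the Metropolis algorithm.

    Each accepted value depends only on the previous one (Markov property)."""
    rng = random.Random(seed)
    theta, lp = start, log_post(start)
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, scale)        # symmetric random-walk proposal
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:   # accept with prob min(1, ratio)
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain

# Hypothetical posterior: normal mean with known unit variance,
# prior theta ~ N(0, 10^2), data y_i ~ N(theta, 1)
y = [1.8, 2.2, 2.0, 1.6, 2.4]
def log_post(theta):
    log_prior = -theta**2 / (2 * 10**2)
    log_lik = -sum((yi - theta) ** 2 for yi in y) / 2
    return log_prior + log_lik

# Two chains from different starting values; discard the first half as burn-in
chains = [metropolis_chain(log_post, s, seed=i) for i, s in enumerate([-5.0, 5.0])]
kept = [c[len(c) // 2:] for c in chains]
means = [sum(c) / len(c) for c in kept]
print(means)  # both chain means should land near the sample mean, about 2.0
```

With both post-burn-in chains concentrated in the same region, the pooled draws approximate the target posterior distribution.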

1.5 Model fit evaluation

A key feature of SEM is the overall model fit test of the basic hypothesis Σ = Σ(𝜃): that is, assessing the degree to which the model-estimated variance/covariance matrix Σ̂ differs from the observed sample variance/covariance matrix S (Hoelter 1983; Bollen 1989; Jöreskog and Sörbom 1989; Bentler 1990). If the model-estimated variance/covariance matrix Σ̂ is not significantly different from the observed data covariance matrix S, then we say the model fits the data well, and we accept the null hypothesis or say the model supports the plausibility of the postulated relations among the variables; otherwise, the model does not fit the data, and the null hypothesis should be rejected. The overall model fit evaluation should be done before interpreting the parameter estimates; without evaluating the model fit, any conclusion from the model estimation could be misleading. To assess the closeness of S to Σ̂, numerous model fit indices have been developed. For detailed information on model fit testing and model fit indices, readers are referred to Marsh et al. (1988), Bollen (1989), Gerbing and Anderson (1993), Tanaka (1993), and Hu and Bentler (1995, 1998, 1999). Most SEM software programs (e.g. LISREL, EQS, AMOS) provide a long list of model fit indices; however, only a few are actually reported in real studies. In the following, we focus on the model fit indices that Mplus provides and that are commonly reported in SEM applications.

20

STRUCTURAL EQUATION MODELING

1.5.1 The model 𝜒² statistic

The 𝜒² statistic is the original fit index for structural equation models; it is defined as:⁸

𝜒² = f_ML(N − 1)    (1.11)

where f_ML = F(S, Σ̂) is the minimum value of the fitting function for the specified model (see Appendix 1.B at the end of this chapter), and N is the sample size. This product is distributed as 𝜒² if the data are multivariate normal and the specified model is correct. The 𝜒² statistic assesses the magnitude of the discrepancy between the sample and the model-estimated variance/covariance matrices. Unlike traditional statistical testing, a non-significant 𝜒² is desired here: we expect the test not to reject the null hypothesis (H0: the residual matrix is zero; that is, there is no difference between the model-estimated variances/covariances and the observed sample variances/covariances). As a matter of fact, 𝜒² is a badness-of-fit measure in the sense that a large 𝜒² corresponds to a bad fit, a small 𝜒² to a good fit, and a 𝜒² value of 0 to a perfect fit. The 𝜒² statistic is the conventional overall test of fit in SEM. Before the 𝜒² statistic was developed by Jöreskog (1969), factor analysis was simply based on subjective decisions; the 𝜒² statistic provided, for the first time, a means of evaluating factor analysis models with more objective criteria. However, the 𝜒² statistic has some explicit limitations. (i) 𝜒² is defined as N − 1 times the fitting function; thus, it is highly sensitive to sample size. The larger the sample size, the more likely the model is to be rejected, and thus the more likely a Type I error (rejecting a correct hypothesis). The probability of rejecting a model increases substantially with sample size, even when the difference between the observed and model-estimated variance/covariance matrices is trivial. (ii) When the sample size is small, the fitting function may not follow a 𝜒² distribution. (iii) 𝜒² is very sensitive to violations of the assumption of multivariate normality.
The 𝜒² value increases when variables have highly skewed and kurtotic distributions. (iv) 𝜒² increases when the number of variables in a model increases. As such, the significance of the 𝜒² test should not be a reason by itself to reject a model. In addition to 𝜒², one also often looks at the relative 𝜒² statistic, which is the ratio of 𝜒² to its degrees of freedom (𝜒²/df). A ratio of 2 or less is generally interpreted as indicating adequate fit (Brookings and Bolton 1988). To address the limitations of the 𝜒² test, a number of model fit indices have been proposed.

1.5.2 Comparative fit index (CFI)

As the name implies, Bentler's (1990) comparative fit index (CFI) compares the fit of the specified model with that of a null model that assumes zero covariances among the observed variables. This measure is directly based on the non-centrality parameter

⁸ In most SEM computer programs, the model 𝜒² is defined as 𝜒² = f_ML(N − 1), but it is defined as 𝜒² = f_ML·N in Mplus.
d = (𝜒² − df), where df is the degrees of freedom of the model. The CFI is defined as:

CFI = (d_null − d_specified) / d_null    (1.12)

where d_null and d_specified are the rescaled non-centrality parameters for the null model and the specified model, respectively. The CFI is thus the ratio of the improvement in non-centrality (moving from the null to the specified model) to the non-centrality of the null model. Because the null model has the worst fit, it has a considerably higher non-centrality (larger d) than the specified model. The CFI ranges from 0 to 1 (values outside this range are reset to 0 or 1). If the specified model fits the data perfectly, then d_specified = 0, leading to CFI = 1. Traditionally, the rule-of-thumb cutoff for this fit index is 0.90; however, Hu and Bentler (1998, 1999) suggest raising this minimum from 0.90 to 0.95. The CFI is a good fit index even in small samples (Bentler 1995). However, the CFI depends on the average size of the correlations in the data: if the average correlation between variables is not high, the CFI will not be very high.
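Given the 𝜒² values and degrees of freedom for a specified model and the null model, Eq. (1.12) is straightforward to compute. A small Python sketch, with the 0–1 truncation described above; the 𝜒² values are hypothetical, not output from any real model:

```python
def cfi(chi2_specified, df_specified, chi2_null, df_null):
    """Comparative fit index (Eq. 1.12) from non-centrality d = chi2 - df."""
    d_null = max(chi2_null - df_null, 0.0)
    d_spec = max(chi2_specified - df_specified, 0.0)
    if d_null == 0.0:
        return 1.0
    # (d_null - d_spec) / d_null, reset into the [0, 1] range
    return min(max(1.0 - d_spec / d_null, 0.0), 1.0)

# Hypothetical output: chi2 = 36.5 on 24 df for the specified model,
# chi2 = 880.0 on 36 df for the null model
print(round(cfi(36.5, 24, 880.0, 36), 3))  # 0.985
```

Because the non-centrality of the null model (844) dwarfs that of the specified model (12.5), the CFI is close to 1.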

1.5.3 Tucker Lewis index (TLI) or non-normed fit index (NNFI)

The Tucker Lewis index (TLI) (Tucker and Lewis 1973) is also called the non-normed fit index (NNFI) by Bentler and Bonett (1980). The TLI is simply another way of comparing the lack of fit of a specified model with the lack of fit of the null model. The TLI is defined as:

TLI = (𝜒²_null/df_null − 𝜒²_specified/df_specified) / (𝜒²_null/df_null − 1)    (1.13)

where 𝜒²_null/df_null and 𝜒²_specified/df_specified are the ratios of the 𝜒² statistics to the degrees of freedom for the null model and the specified model, respectively. As such, the TLI has a penalty for model complexity: the more free parameters, the smaller df_specified, and thus the larger 𝜒²_specified/df_specified, leading to a smaller TLI. The TLI is not guaranteed to vary from 0 to 1; if its value falls outside the 0–1 range, it is reset to 0 or 1. A negative TLI indicates that the 𝜒²/df ratio for the null model is less than the ratio for the specified model. This situation might occur if the specified model has very few degrees of freedom and the correlations among the observed variables are low. Although the TLI tends to run lower than the CFI, the recommended cutoff value for the TLI is the same as for the CFI; a TLI value lower than 0.90 indicates a need to re-specify the model. Like the CFI, the TLI depends on the average size of the correlations in the data: if the average correlation between variables is not high, the TLI will not be very high. Unlike the CFI, the TLI is moderately corrected for parsimony: its value estimates the relative model-fit improvement per degree of freedom over the null model (Hoyle and Panter 1995). The TLI is often reported along with the CFI; this
is akin to reporting the adjusted goodness-of-fit (AGFI) along with the GFI when LISREL or other SEM programs are used for modeling.
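Eq. (1.13) can likewise be sketched in Python. The function below returns the untruncated TLI (so values outside 0–1 are visible), and the 𝜒² values in the example are hypothetical:

```python
def tli(chi2_specified, df_specified, chi2_null, df_null):
    """Tucker-Lewis index (Eq. 1.13); may fall outside [0, 1] before truncation."""
    ratio_null = chi2_null / df_null
    ratio_spec = chi2_specified / df_specified
    return (ratio_null - ratio_spec) / (ratio_null - 1.0)

# Hypothetical chi2 values: 36.5 on 24 df (specified), 880.0 on 36 df (null)
print(round(tli(36.5, 24, 880.0, 36), 3))  # 0.978
```

Note that with these values the TLI (0.978) runs slightly lower than the corresponding CFI, as the text describes.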

1.5.4 Root mean square error of approximation (RMSEA)

This is the most popular measure of model fit for SEM. The root mean square error of approximation (RMSEA) is a standardized measure of the error of approximation, i.e. the lack of fit of the specified model to the population. This measure is based on the non-centrality parameter and is defined as:

RMSEA = √[(𝜒²_S − df_S)/(N·df_S)] = √[((𝜒²_S/df_S) − 1)/N]    (1.14)

where (𝜒²_S − df_S)/N is the rescaled non-centrality parameter. The RMSEA is a parsimonious measure: by adjusting for the model degrees of freedom df_S, it measures the average lack of fit per degree of freedom. The values of the RMSEA are often interpreted as: 0 = perfect fit; <0.05 = close fit; 0.05–0.08 = reasonable fit; 0.08–0.10 = mediocre fit; >0.10 = poor fit (Browne and Cudeck 1993; MacCallum et al. 1996; Byrne 1998). Some also suggest an RMSEA of 0.06 (Hu and Bentler 1999) as the cutoff. The RMSEA cutoffs are definitions for the population. To account for the sampling error in the RMSEA, a 90% confidence interval (CI) is computed for the RMSEA in the Mplus output. The CIs of the RMSEA are asymmetric around the point estimate and range from zero to positive infinity (Browne and Cudeck 1993). Ideally, the lower value of the 90% CI should be very near zero (or no worse than 0.05), and the upper value should be less than 0.08. In addition, there is a one-sided test of close fit with the null hypothesis H0: RMSEA ≤ 0.05. If the corresponding P-value is large (e.g. > 0.05), we cannot reject the null hypothesis, and the model is considered to have a close fit; otherwise, the model's fit is worse than close fitting (i.e. the RMSEA is greater than 0.05). A P-value > 0.05 for the close-fit test is desirable in model-fit evaluation. The RMSEA has become an increasingly used model-fit index in applications of SEM, and simulation studies have shown that the RMSEA performs better than other fit indices (Browne and Arminger 1995; Browne and Cudeck 1993; Marsh and Balla 1994; Steiger 1990; Sugawara and MacCallum 1993).
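The point estimate in Eq. (1.14) can be sketched as follows (the confidence interval and close-fit test require the non-central 𝜒² distribution and are omitted). The 𝜒² value, df, and N are hypothetical, and negative non-centrality is reset to zero as is conventional:

```python
import math

def rmsea(chi2, df, n):
    """RMSEA point estimate (Eq. 1.14); negative non-centrality reset to 0."""
    d = max(chi2 - df, 0.0)
    return math.sqrt(d / (n * df))

# Hypothetical model: chi2 = 36.5 on 24 df, N = 400
print(round(rmsea(36.5, 24, 400), 3))  # 0.036
```

An RMSEA of 0.036 would fall in the close-fit range under the guidelines above.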

1.5.5 Root mean-square residual (RMR), standardized RMR (SRMR), and weighted RMR (WRMR)

These are residual-based model-fit indices. The root mean-square residual (RMR) is the square root of the average squared residual. As mentioned earlier, residuals in SEM are the differences between the elements of the sample variance/covariance matrix (S) and those of the model-implied variance/covariance matrix (Σ̂). The RMR is defined as (Jöreskog and Sörbom 1981):

RMR = [ΣΣ (s_jk − 𝜎̂_jk)² / e]^(1/2)    (1.15)


where s_jk and 𝜎̂_jk are elements of the observed variance/covariance matrix S and the model-estimated variance/covariance matrix Σ̂, respectively; e = (p + q)(p + q + 1)/2; and (p + q) is the total number of observed variables.⁹ The RMR is a badness-of-fit index (larger values signal worse fit) ranging from 0.0 to 1.0. An RMR value of less than 0.08 is considered a good fit (Hu and Bentler 1999). The value of this index tends to be smaller when the sample size and the number of parameters in the model increase. The standardized root mean-square residual (SRMR) is a standardized version of the RMR based on standardized residuals. It is defined as (Bentler 1995; Muthén 1998–2004):

SRMR = [ΣΣ r_jk² / e]^(1/2)    (1.16)

where r_jk is a residual in the correlation metric, i.e. the difference between an element of the observed correlation matrix and the corresponding element of the model-estimated correlation matrix (Muthén 1998–2004):

r_jk = s_jk/√(s_jj·s_kk) − 𝜎̂_jk/√(𝜎̂_jj·𝜎̂_kk)    (1.17)

where s_jk is the sample covariance between the observed variables y_j and y_k, 𝜎̂_jk is the corresponding model-estimated covariance, s_jj and s_kk are sample variances, and 𝜎̂_jj and 𝜎̂_kk are the corresponding model-estimated variances. An SRMR value of less than 0.08 is considered a good fit (Hu and Bentler 1999), and less than 0.10 is acceptable (Kline 2005). The weighted root mean-square residual (WRMR) is another variant of the RMR, defined as (Muthén 1998–2004):

WRMR = {ΣΣ [(s_jk − 𝜎̂_jk)² / 𝜐_jk] / e}^(1/2)    (1.18)

where (s_jk − 𝜎̂_jk) is a residual, 𝜐_jk is the estimated asymptotic variance of s_jk, and e is the total number of sample variances and covariances. For categorical outcomes, the WRMR is defined as follows and is available with the diagonally weighted least-squares estimators (e.g. WLSM and WLSMV) (Muthén 1998–2004):

WRMR = √[2n·F(𝜃̂) / e]    (1.19)

where F(𝜃̂) is the minimum of the WLS fitting function. A WRMR value of 1.0 or lower is considered a good fit (Yu 2002).
Note that the WRMR is considered an experimental test statistic: it may indicate a bad model fit while other indices indicate a good fit. The WRMR is no longer available in the latest versions of Mplus.

⁹ For p endogenous indicators and q exogenous indicators, the total number of distinct variances and covariances in the sample equals (p + q)(p + q + 1)/2.
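Eqs. (1.16)–(1.17) can be sketched in pure Python. The function and the 2 × 2 matrices below are hypothetical; in practice S comes from the data and Σ̂ from the fitted model:

```python
import math

def srmr(S, Sigma_hat):
    """Standardized root mean-square residual (Eqs. 1.16-1.17).

    S and Sigma_hat are symmetric lists-of-lists (sample and model-estimated
    variance/covariance matrices); e = p(p + 1)/2 unique elements."""
    p = len(S)
    total, e = 0.0, p * (p + 1) // 2
    for j in range(p):
        for k in range(j + 1):  # lower triangle including the diagonal
            r = (S[j][k] / math.sqrt(S[j][j] * S[k][k])
                 - Sigma_hat[j][k] / math.sqrt(Sigma_hat[j][j] * Sigma_hat[k][k]))
            total += r ** 2
    return math.sqrt(total / e)

# Hypothetical 2 x 2 matrices: the model slightly underestimates one correlation
S = [[1.00, 0.48], [0.48, 1.00]]
Sigma_hat = [[1.00, 0.42], [0.42, 1.00]]
print(round(srmr(S, Sigma_hat), 4))  # 0.0346
```

The diagonal residuals vanish here because both matrices are in a correlation metric; only the 0.48 vs. 0.42 discrepancy contributes.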


1.5.6 Information criteria indices

Information criteria statistics are relative model-fit statistics that are commonly used for model comparisons, including comparisons of non-nested models. The general form of an information criterion is (Sclove 1987):

−2 ln(L) + a(n)m    (1.20)

where L is the maximized likelihood of the model, with larger values of L indicating a better fit; correspondingly, smaller values of −2 ln(L) indicate a better fit. The term a(n)m in Eq. (1.20) is a penalty added to −2 ln(L) for model complexity, where n and m represent the sample size and the number of free model parameters, respectively. Mplus provides three information criteria: Akaike's information criterion (AIC) (Akaike 1973, 1983); the Bayesian information criterion (BIC), also known as the Schwarz criterion, Schwarz Bayesian criterion (SBC), or Schwarz Bayesian information criterion (SBIC) (Schwarz 1978); and the sample-size-adjusted BIC (ABIC) (Sclove 1987). They are defined, respectively, as:

AIC = −2 ln(L) + 2m    (1.21)

BIC = −2 ln(L) + ln(n)m    (1.22)

ABIC = −2 ln(L) + ln(n*)m    (1.23)

These equations are all special cases of Eq. (1.20). For the AIC, the penalty term a(n) equals 2 regardless of sample size, whereas for the BIC, a(n) = ln(n). For the ABIC, the sample size n in the BIC penalty is replaced with n* = (n + 2)/24 to somewhat reduce the penalty for larger sample sizes (Sclove 1987; Muthén 1998–2004). Clearly, the BIC and ABIC impose a larger penalty than the AIC for model complexity, because sample size enters the penalty term; thus the BIC and ABIC favor smaller models with fewer free parameters. With so many model-fit indices proposed, no single index should be relied on exclusively for testing a hypothesized structural equation model. It is recommended that multiple fit indices be reported for model evaluation in order to avoid drawing inaccurate conclusions about a model's fit to the data (Bollen 1989; Bollen and Long 1992; Tanaka 1993; Bentler 2007). Brown (2015) recommends reporting at least one index from each of the categories of absolute fit, incremental fit, and parsimony correction. The CFI, TLI, and RMSEA are commonly reported with the 𝜒² statistic in many studies. Importantly, the model-fit indices indicate the overall model fit on average. A model with excellent fit indices is not necessarily a correct model. First, other model components are also important for model evaluation: for example, coefficient estimates should be interpretable, the R-squares of equations should be acceptable, and there should be no improper solutions (e.g. a negative variance, or a correlation less than −1 or greater than 1). Problems in these components indicate that some parts of the model may not fit the data. Second, there may be many models
that fit the data equally well, as judged by model-fit indices. Among these equivalent models, the parsimonious model should be accepted. In addition, model evaluation is not entirely a statistical matter. It should also be based on sound theory and empirical findings. If a model makes no substantive sense, it is not justified even if it statistically fits the data very well.
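The information criteria in Eqs. (1.21)–(1.23) require only the maximized log-likelihood, the number of free parameters, and the sample size. A Python sketch, with hypothetical values in the example:

```python
import math

def information_criteria(log_l, m, n):
    """AIC, BIC, and sample-size-adjusted BIC (Eqs. 1.21-1.23)."""
    aic = -2 * log_l + 2 * m
    bic = -2 * log_l + math.log(n) * m
    n_star = (n + 2) / 24                    # Sclove's adjusted sample size
    abic = -2 * log_l + math.log(n_star) * m
    return aic, bic, abic

# Hypothetical fit: log-likelihood -3410.2, 19 free parameters, N = 500
aic, bic, abic = information_criteria(-3410.2, 19, 500)
print(round(aic, 1), round(bic, 1), round(abic, 1))
```

With n = 500, the ordering AIC < ABIC < BIC reflects the increasingly heavy complexity penalties described above.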

1.5.7 Model fit evaluation with Bayes estimator

When a Bayes estimator is used for model estimation, the traditional model-fit statistics/indices do not apply. In Bayesian analysis, it is important to carefully check whether the Markov chain has converged, or reached stationarity. Before making any inference, convergence should be checked for all parameters, not only the parameters of interest. Various statistical diagnostic tests can help assess Markov chain convergence, although no test is conclusive. The main criterion used in Mplus for assessing convergence is the Gelman-Rubin convergence criterion (Gelman and Rubin 1992), based on the potential scale reduction (PSR) (Asparouhov and Muthén 2010c). The PSR is defined as the square root of the ratio of the total variance across chains to the pooled within-chain variance:

PSR = √[(W + B)/W]    (1.24)

where W and B are the pooled within-chain and between-chain variation of the posterior parameter estimates, respectively. If the PSR is close to 1 (e.g. between 1 and 1.1) (Gelman et al. 2004) for all the parameters in the model, the between-chain variation is small relative to the within-chain variation, and convergence is considered to have been achieved. Note that the PSR is closely related to the intraclass correlation coefficient (ICC).¹⁰ The closer the PSR is to 1, the smaller the ICC, indicating that different iterative processes (i.e. different chains) yield similar parameter estimates, and thus the MCMC process has converged, or is stationary. Mplus 7 and later versions also provide the Kolmogorov-Smirnov (KS) test (Chakravarti et al. 1967) for assessing convergence by comparing posterior distributions across chains. The hypothesis for the KS test is that the posterior distributions of a parameter estimate are equal across chains. If the KS test does not reject this hypothesis (i.e. P > 0.05), convergence of the MCMC sequence has been achieved.
However, "for more complex models P-values > 0.001 can be interpreted as confirmed convergence" (Muthén and Asparouhov 2012b, slide 51). In addition, graphics options for posterior parameter trace plots and autocorrelation plots are available in Mplus to monitor the posterior distributions (Muthén and Muthén 1998–2017). After convergence of the MCMC algorithm is confirmed, model fit can be assessed using Bayesian posterior predictive checking (PPC) (Gelman et al. 1996; Scheines et al. 1999; Muthén 2010; Kaplan and Depaoli 2012). For continuous

¹⁰ (1 − 1/PSR²) = 1 − W/(W + B) = B/(W + B) = ICC.


outcomes and the continuous latent response variables of categorical outcomes, the model 𝜒² fit function f(Y_t, X, 𝜃_t) is computed using the current parameter estimates 𝜃_t and the observed data at MCMC iteration t. In addition, a replicated data set Ŷ_t of the same size as the original data set is generated using the current parameter estimates, and the 𝜒² fit function f(Ŷ_t, X, 𝜃_t) is computed from the replicated data. The difference between f(Y_t, X, 𝜃_t) and f(Ŷ_t, X, 𝜃_t) is computed every tenth iteration in Mplus, resulting in a distribution of the 𝜒² fit-function difference, and a symmetric 95% CI for this difference is produced. For an excellent-fitting model, a fit-function difference of zero falls close to the middle of the 95% CI. If zero is not covered by the 95% CI, the model does not fit the data (Muthén 2010; Muthén and Asparouhov 2012a; Asparouhov and Muthén 2010c). In PPC, the posterior predictive P-value (PPP) is computed as (Asparouhov and Muthén 2010c; Muthén and Asparouhov 2012a):

PPP = P{f(Y_t, X, 𝜃_t) < f(Ŷ_t, X, 𝜃_t)} = (Σ_{t=1}^{m} d_t)/m    (1.25)

where d_t is an indicator of the difference between f(Y_t, X, 𝜃_t) and f(Ŷ_t, X, 𝜃_t) at the t-th iteration: if f(Y_t, X, 𝜃_t) < f(Ŷ_t, X, 𝜃_t), then d_t = 1; otherwise, d_t = 0. Thus the PPP is simply the proportion of times in m iterations that the 𝜒² fit function of the observed data is smaller (i.e. better) than that of the replicated data. The PPP is considered akin to a SEM fit index (Muthén and Asparouhov 2012a). Small PPP values (e.g. < 0.05) indicate poor model fit. If the PPP value is > 0.05, we would interpret the discrepancy between the model-replicated data and the observed data as not statistically significant, and thus the model fit would be adequate.
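The PSR in Eq. (1.24) can be sketched from stored chains of posterior draws. This is a simplified version: W is the pooled within-chain variance and B the variance of the chain means, matching the description above, while Mplus's exact computation differs in details (e.g. which iterations enter W and B). The two short chains are hypothetical:

```python
def psr(chains):
    """Potential scale reduction (Eq. 1.24) from several chains of posterior draws."""
    k, n = len(chains), len(chains[0])
    chain_means = [sum(c) / n for c in chains]
    grand_mean = sum(chain_means) / k
    # Pooled within-chain variance W and between-chain variance B
    w = sum(sum((x - m) ** 2 for x in c)
            for c, m in zip(chains, chain_means)) / (k * (n - 1))
    b = sum((m - grand_mean) ** 2 for m in chain_means) / (k - 1)
    return ((w + b) / w) ** 0.5

# Two hypothetical chains that mix well -> PSR close to 1
c1 = [0.9, 1.1, 1.0, 0.8, 1.2, 1.0, 0.9, 1.1]
c2 = [1.0, 0.8, 1.2, 1.1, 0.9, 1.0, 1.1, 1.1]
print(round(psr([c1, c2]), 3))  # 1.009
```

A value of about 1.009 falls within the 1–1.1 range cited above, so these chains would be judged converged.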

1.5.8 Model comparison

In SEM, it is recommended to consider alternative models rather than to examine a single model, so that the best-fitting model can be determined by model comparison (Bollen and Long 1993). As in other statistical model comparisons, the likelihood ratio (LR) test is often used in SEM to compare two nested models estimated from the same data set. For two models to be nested – for example, Model B nested within Model A – Model B must have fewer free parameters and, therefore, a larger number of df than Model A. In addition, Model B cannot include parameters that are not included in Model A. Once these two conditions are satisfied, the difference in the model 𝜒² or likelihood function between the two models follows a 𝜒² distribution, with df equal to the difference in df between the two models.


It is important to remember that when some robust estimators, such as MLM, MLMV, MLR, ULSMV, WLSM, and WLSMV, are used for model estimation, the model 𝜒² statistics cannot be used for the LR test in the regular way, because the difference in the model 𝜒² statistic between two nested models does not follow a 𝜒² distribution (Muthén and Muthén 1998–2017). Such difference testing will be discussed with examples in the next chapter. For models that are not nested, information criteria measures, such as the AIC, BIC, and ABIC, can be used for model comparison: the model with smaller information criteria values has a better fit. These information measures are important parsimony-corrected indices that can be used to compare non-nested as well as nested models. Raftery (1996), based on Jeffreys (1961), suggests guidelines for the strength of evidence favoring one model over another based on the difference in absolute value of BIC: 0–2, weak evidence; 2–6, positive evidence; 6–10, strong evidence; and 10+, very strong evidence. For BSEM, Mplus provides the deviance information criterion (DIC) (Spiegelhalter et al. 2002; Muthén 2010) for model comparisons. The DIC is a Bayesian generalization of the AIC that balances model parsimony and fit (Gill 2008). The DIC is defined by Spiegelhalter et al. (2002) as:

DIC = p_D + D̄    (1.26)

where D̄ is the posterior mean of the deviance, and p_D = D̄ − D(𝜃̄), with D(𝜃̄) the deviance evaluated at the posterior means. Because a deviance can be negative, the DIC is allowed to be negative. Like the AIC and BIC, the DIC can be used to compare models whether or not they are nested; smaller DIC values indicate better models.

1.6 Model modification

In applications of SEM, one usually specifies a model based on theory or empirical findings and then fits the model to the available data. Very often, the tentative initial model does not fit the data; in other words, the initial model is somewhat misspecified. In such a case, the possible sources of lack of fit need to be assessed to determine what specifically is wrong with the model specification; the model is then modified and retested using the same data. This process is called a model specification search. To improve an initial model that does not fit the data satisfactorily, the modification indices (MIs) (Sörbom 1989) associated with the fixed parameters of the model are most often used as diagnostic statistics to detect model misspecification. An MI estimates the decrease in the model 𝜒² statistic, with 1 df, if a particular parameter is freed from its constraint in the preceding model. A high MI value indicates that the corresponding fixed parameter should be freed to improve model fit. Although a drop in 𝜒² of 3.84 with 1 df indicates a significant change in 𝜒² at the P = 0.05 level, there is no strict rule of thumb concerning how large MIs must be to warrant a meaningful model modification. With the MODINDICES (or MOD) option in the OUTPUT
command, parameters with an MI ≥ 10 will be listed in the Mplus output by default. To list all parameters with an MI ≥ 3.84, the option should be specified as MOD(3.84). If several parameters have high MIs, they should be freed one at a time, beginning with the largest MI, because a change in a single parameter could affect other parts of the solution (MacCallum et al. 1992). Freeing some parameters may improve model fit; however, the model modification must be theoretically meaningful. Mplus also provides the expected parameter change (EPC) and standardized EPC indices, which estimate the expected change in the value of a parameter if that parameter were freed (Saris et al. 1987). Together with MIs, EPCs provide important information for model re-specification. In addition, an important approach for checking lack of model fit is to examine the model residuals. Unlike the residuals in multiple regression, the residuals in SEM are the elements of the residual matrix (S − Σ̂), where S is the sample variance/covariance matrix and Σ̂ is the model-estimated variance/covariance matrix. The residuals depend on the measurement scales of the observed variables, and thus are not very meaningful in themselves because the observed variables often have different metrics. To avoid this problem, the residuals are often standardized, i.e. divided by their asymptotic standard errors (Jöreskog and Sörbom 1989). Although standardized residuals are not technically a model fit index, they provide useful information about how close the estimated variances/covariances are to those observed. A large standardized residual indicates a large discrepancy in a specific variance or covariance between S and Σ̂. A standardized residual is considered large if it is greater than 2.58 in magnitude (Jöreskog and Sörbom 1989). It must be emphasized that model modification, or re-specification, should be both data-driven and theory-driven.
Any model modification must be justifiable on a theoretical basis and through empirical findings. Blind use of modification indices for model modification should be avoided. Parameters should not be added or removed solely for the purpose of improvement in model fit. Our goal is to find a model that fits the data well from a statistical point of view; and, importantly, all the parameters of the model should have a substantively meaningful interpretation.
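Inspecting the residual matrix can be sketched as follows. The matrices are hypothetical, and the full standardization by asymptotic standard errors (needed for the 2.58 criterion) is beyond this sketch, so only the raw and correlation-metric residuals are computed:

```python
import math

def residual_matrices(S, Sigma_hat):
    """Raw residuals S - Sigma_hat and residuals in the correlation metric.

    Fully standardized residuals (compared against 2.58) would additionally
    require the asymptotic standard errors, which are omitted here."""
    p = len(S)
    raw = [[S[j][k] - Sigma_hat[j][k] for k in range(p)] for j in range(p)]
    corr = [[S[j][k] / math.sqrt(S[j][j] * S[k][k])
             - Sigma_hat[j][k] / math.sqrt(Sigma_hat[j][j] * Sigma_hat[k][k])
             for k in range(p)] for j in range(p)]
    return raw, corr

# Hypothetical matrices: the model underestimates the covariance of variables 0 and 1
S = [[2.0, 0.9, 0.3], [0.9, 1.5, 0.4], [0.3, 0.4, 1.0]]
Sigma_hat = [[2.0, 0.5, 0.3], [0.5, 1.5, 0.4], [0.3, 0.4, 1.0]]
raw, corr = residual_matrices(S, Sigma_hat)
print(round(raw[0][1], 3), round(corr[0][1], 3))  # largest residual: the (0, 1) pair
```

A large residual concentrated in one pair of variables, as here, would point to the specific covariance the model fails to reproduce.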

1.7 Computer programs for SEM

A wide variety of computer programs/software has been developed for SEM in the past two decades. There are several well-known specialized packages, such as LISREL (Jöreskog and Sörbom 2015), AMOS (Arbuckle 2014), EQS (Bentler 2006), Mplus (Muthén and Muthén 1998–2017), PROC CALIS in SAS (SAS Institute Inc. 2013), sem in Stata (Acock 2013), the sem package in R (Fox 2006), lavaan (Rosseel 2012), OpenMx (Neale et al. 2016), etc. Each computer program has its own strengths and unique features; which program to use is basically a matter of personal preference. In this book, the computer program Mplus is used for model demonstration. Mplus is the successor to the computer program LISCOMP (Muthén 1988). While

INTRODUCTION TO STRUCTURAL EQUATION MODELING


retaining most of LISCOMP’s features for SEM of categorical and continuous data, Mplus, particularly in Mplus 7 and later versions, comes with many important additions. It has great flexibility in handling numerous types of models with continuous and categorical outcomes, as well as categorical latent variables. Mplus has advanced capabilities for handling data non-normality, complex survey data, multi-population/group data, incomplete data, and intensive time-series processes. Some recently developed advanced structural equation models, such as continuous time survival SEM (Larsen 2005), multilevel mixture SEM (Asparouhov and Muthén 2008), multigroup multilevel analysis (Asparouhov and Muthén 2012a), exploratory SEM (Asparouhov and Muthén 2009), Bayesian structural equation modeling (BSEM, Asparouhov and Muthén 2010c; Muthén and Asparouhov 2012a), dynamic SEM (DSEM, Asparouhov et al. 2018; Asparouhov and Muthén 2018a), etc. can be readily implemented in Mplus. Because of lack of data, those models, except BSEM and DSEM, are not demonstrated in the current edition of the book. The models demonstrated in this book are intended to show readers how to build structural equation models in Mplus using both cross-sectional and longitudinal data. Mplus syntax for the models is provided in each corresponding chapter of the book. The data used for model demonstration in the book will be available from the first author upon request.


STRUCTURAL EQUATION MODELING

Appendix 1.A Expressing variances and covariances among observed variables as functions of model parameters

Let's denote Σ as the population variance/covariance matrix of the variables y and x. Then
\[
\Sigma=\begin{bmatrix} E(YY') & E(XY')' \\ E(XY') & E(XX') \end{bmatrix}
\tag{1.A.1}
\]
where the diagonal elements are the variances of the variables y and x, respectively, and the off-diagonal elements are the covariances among y and x. In SEM, it is hypothesized that the population variance/covariance matrix of y and x can be expressed as a function of the model parameters θ, that is:
\[
\Sigma = \Sigma(\theta)
\tag{1.A.2}
\]
where Σ(θ) is called the model implied variance/covariance matrix. Based on the three basic SEM equations Eq. (1.1), we can derive that Σ(θ) can be expressed as functions of the parameters in the eight fundamental SEM matrices. Let's start with the variance/covariance matrix of y, then the variance/covariance matrix of x and the variance/covariance matrix of y and x, and finally assemble them together. The variance/covariance matrix of y can be expressed as the following:
\[
\begin{aligned}
E(YY') &= E[(\Lambda_y\eta+\varepsilon)(\Lambda_y\eta+\varepsilon)'] \\
&= E[(\Lambda_y\eta+\varepsilon)(\eta'\Lambda_y'+\varepsilon')] \\
&= E[\Lambda_y\eta\eta'\Lambda_y'] + \Theta_\varepsilon \\
&= \Lambda_y E[\eta\eta']\Lambda_y' + \Theta_\varepsilon
\end{aligned}
\tag{1.A.3}
\]
where Θ_ε is the variance/covariance matrix of the error term ε. As η = Bη + Γξ + ζ, then η = (I − B)⁻¹(Γξ + ζ), and
\[
\begin{aligned}
\eta\eta' &= [(I-B)^{-1}(\Gamma\xi+\zeta)][(I-B)^{-1}(\Gamma\xi+\zeta)]' \\
&= [(I-B)^{-1}(\Gamma\xi+\zeta)]\{(\Gamma\xi+\zeta)'[(I-B)^{-1}]'\} \\
&= [(I-B)^{-1}(\Gamma\xi+\zeta)]\{(\xi'\Gamma'+\zeta')[(I-B)^{-1}]'\}
\end{aligned}
\tag{1.A.4}
\]
Assuming that ζ is independent of ξ, then
\[
E(\eta\eta') = (I-B)^{-1}(\Gamma\Phi\Gamma'+\Psi)[(I-B)^{-1}]'
\tag{1.A.5}
\]
where Φ is the variance/covariance matrix of the latent variable ξ, and Ψ is the variance/covariance matrix of the residual ζ. Substituting Eq. (1.A.5) into Eq. (1.A.3), we have:
\[
E(YY') = \Lambda_y\{(I-B)^{-1}(\Gamma\Phi\Gamma'+\Psi)[(I-B)^{-1}]'\}\Lambda_y' + \Theta_\varepsilon
\tag{1.A.6}
\]
This equation implies that the variances/covariances of the observed y variables are a function of model parameters such as the factor loadings Λ_y, the path coefficients B and Γ, the variances/covariances Φ of the exogenous latent variables, the residual variance/covariance matrix Ψ, and the error variances/covariances Θ_ε. The variance/covariance matrix of x can be expressed as the following:
\[
E(XX') = E[(\Lambda_x\xi+\delta)(\Lambda_x\xi+\delta)'] = E[(\Lambda_x\xi+\delta)(\xi'\Lambda_x'+\delta')]
\tag{1.A.7}
\]
Assuming that δ is independent of ξ, then
\[
E(XX') = E[\Lambda_x\xi\xi'\Lambda_x' + \delta\delta'] = \Lambda_x\Phi\Lambda_x' + \Theta_\delta
\tag{1.A.8}
\]
where Θ_δ is the variance/covariance matrix of the error term δ. Equation (1.A.8) implies that the variances/covariances of the observed x variables are a function of model parameters such as the loadings Λ_x, the variances/covariances Φ of the exogenous latent variables, and the error variances/covariances Θ_δ. The covariance matrix among x and y can be expressed as the following:
\[
E(XY') = E[(\Lambda_x\xi+\delta)(\Lambda_y\eta+\varepsilon)'] = E[(\Lambda_x\xi+\delta)(\eta'\Lambda_y'+\varepsilon')]
\tag{1.A.9}
\]
Assuming that δ and ε are independent of each other and independent of the latent variables, then
\[
\begin{aligned}
E(XY') &= E(\Lambda_x\xi\eta'\Lambda_y') = \Lambda_x E(\xi\eta')\Lambda_y' \\
&= \Lambda_x E\{\xi[(I-B)^{-1}(\Gamma\xi+\zeta)]'\}\Lambda_y' \\
&= \Lambda_x E\{\xi(\Gamma\xi+\zeta)'[(I-B)^{-1}]'\}\Lambda_y' \\
&= \Lambda_x E\{\xi\xi'\Gamma'[(I-B)^{-1}]' + \xi\zeta'[(I-B)^{-1}]'\}\Lambda_y' \\
&= \Lambda_x\Phi\Gamma'[(I-B)^{-1}]'\Lambda_y'
\end{aligned}
\tag{1.A.10}
\]
Thus, the variances and covariances among the observed variables x and y can be expressed in terms of the model parameters:
\[
\Sigma(\theta)=\begin{bmatrix}
\Lambda_y\{(I-B)^{-1}(\Gamma\Phi\Gamma'+\Psi)[(I-B)^{-1}]'\}\Lambda_y' + \Theta_\varepsilon & \Lambda_y(I-B)^{-1}\Gamma\Phi\Lambda_x' \\
\Lambda_x\Phi\Gamma'[(I-B)^{-1}]'\Lambda_y' & \Lambda_x\Phi\Lambda_x' + \Theta_\delta
\end{bmatrix}
\tag{1.A.11}
\]
where the upper-right part of the matrix is the transpose of the covariance matrix among x and y. Each element in the model implied variance/covariance matrix Σ(θ) is a function of model parameters. For a set of specific model parameters from the eight SEM fundamental matrices that constitute a structural equation model, there is one and only one corresponding model implied variance/covariance matrix Σ(θ).
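The assembly in Eq. (1.A.11) can be verified numerically. The numpy sketch below uses a tiny hypothetical model (one exogenous factor ξ, one endogenous factor η, two indicators each); all parameter values are made up for illustration.

```python
import numpy as np

# Hypothetical parameter values for a tiny model (illustrative only).
Lam_y = np.array([[1.0], [0.8]])    # loadings of y1, y2 on eta
Lam_x = np.array([[1.0], [0.7]])    # loadings of x1, x2 on xi
B     = np.array([[0.0]])           # no eta -> eta paths
Gamma = np.array([[0.5]])           # xi -> eta path
Phi   = np.array([[1.2]])           # var(xi)
Psi   = np.array([[0.6]])           # var(zeta)
Th_e  = np.diag([0.4, 0.5])         # error variances of y
Th_d  = np.diag([0.3, 0.4])         # error variances of x

IBinv = np.linalg.inv(np.eye(1) - B)                       # (I - B)^{-1}
E_eta = IBinv @ (Gamma @ Phi @ Gamma.T + Psi) @ IBinv.T    # Eq. (1.A.5)

Syy = Lam_y @ E_eta @ Lam_y.T + Th_e                       # Eq. (1.A.6)
Sxx = Lam_x @ Phi @ Lam_x.T + Th_d                         # Eq. (1.A.8)
Sxy = Lam_x @ Phi @ Gamma.T @ IBinv.T @ Lam_y.T            # Eq. (1.A.10)

Sigma_theta = np.block([[Syy, Sxy.T],                      # Eq. (1.A.11)
                        [Sxy, Sxx]])
assert np.allclose(Sigma_theta, Sigma_theta.T)             # must be symmetric
print(Sigma_theta.round(3))
```

For these values, E(ηη′) = 0.5·1.2·0.5 + 0.6 = 0.9, so var(y1) = 0.9 + 0.4 = 1.3 and cov(x1, y1) = 1.2·0.5 = 0.6, which is exactly what the assembled matrix contains.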


Appendix 1.B Maximum likelihood function for SEM

In structural equation model estimation, attention is directed to the sample distribution of the observed variance/covariance matrix S. If a random sample is selected from a multivariate normal population, the likelihood of finding a sample with variance/covariance matrix S is given by the Wishart distribution (Wishart 1928):
\[
W(S,\Sigma,n)=\frac{e^{-\frac{1}{2}n\cdot\mathrm{tr}(S\Sigma^{-1})}\,|nS|^{\frac{1}{2}(n-K-1)}}
{|\Sigma|^{\frac{1}{2}n}\,2^{\frac{1}{2}nK}\,\pi^{\frac{1}{4}K(K-1)}\prod_{k=1}^{K}\Gamma\!\left(\tfrac{1}{2}(n+1-k)\right)}
\tag{1.B.1}
\]
where S is the sample variance/covariance matrix, Σ is the population variance/covariance matrix, n = N − 1 (N is the sample size), K is the number of variables, and Γ is the gamma function. Note that all the terms in Eq. (1.B.1), except those involving Σ, are constant. Since we are only interested in maximizing the function rather than in calculating its precise value, all the constant terms in Eq. (1.B.1) can be combined into one constant term C, and thus the equation can be simplified as:
\[
W(S,\Sigma,n)=\frac{e^{-\frac{1}{2}n\cdot\mathrm{tr}(S\Sigma^{-1})}\,C}{|\Sigma|^{\frac{1}{2}n}}
= e^{-\frac{1}{2}n\cdot\mathrm{tr}(S\Sigma^{-1})}\,|\Sigma|^{-\frac{1}{2}n}\,C
\tag{1.B.2}
\]
For a model that fits the data perfectly, Σ̂ = S. As such, the ratio of the Wishart function of the specified model to that of the perfect model is:
\[
LR=\frac{e^{-\frac{1}{2}n\cdot\mathrm{tr}(S\Sigma^{-1})}\,|\Sigma|^{-\frac{1}{2}n}\,C}
{e^{-\frac{1}{2}n\cdot\mathrm{tr}(SS^{-1})}\,|S|^{-\frac{1}{2}n}\,C}
= e^{-\frac{1}{2}n\cdot\mathrm{tr}(S\Sigma^{-1})}\,|\Sigma|^{-\frac{1}{2}n}\,
e^{\frac{1}{2}n\cdot\mathrm{tr}(SS^{-1})}\,|S|^{\frac{1}{2}n}
\tag{1.B.3}
\]
Taking the natural logarithm, and noting that tr(SS⁻¹) = tr(I) = p + q (the number of observed variables), we have
\[
\begin{aligned}
\ln(LR) &= -\tfrac{1}{2}n\cdot\mathrm{tr}(S\Sigma^{-1}) - \tfrac{1}{2}n\cdot\ln|\Sigma| + \tfrac{1}{2}n\cdot\mathrm{tr}(SS^{-1}) + \tfrac{1}{2}n\cdot\ln|S| \\
&= -\tfrac{1}{2}n\left[\mathrm{tr}(S\Sigma^{-1}) + \ln|\Sigma| - \mathrm{tr}(SS^{-1}) - \ln|S|\right] \\
&= -\tfrac{1}{2}n\left[\mathrm{tr}(S\Sigma^{-1}) + \ln|\Sigma| - (p+q) - \ln|S|\right]
\end{aligned}
\tag{1.B.4}
\]
Since a minus sign precedes the right side of Eq. (1.B.4), maximizing the equation is equivalent to minimizing the function in its brackets:
\[
F_{ML}(\theta) = \ln|\hat{\Sigma}| + \mathrm{tr}(S\hat{\Sigma}^{-1}) - \ln|S| - (p+q)
\tag{1.B.5}
\]
where F_ML(θ) or F_ML is called the minimum discrepancy function, which is the value of the fitting function evaluated at the final estimates (Hayduk 1987).
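The fitting function (1.B.5) is simple enough to compute directly. The sketch below, with toy matrices invented for illustration, confirms the key property that F_ML = 0 when the model reproduces S exactly and F_ML > 0 otherwise.

```python
import numpy as np

def f_ml(S, Sigma_hat):
    """Minimum discrepancy function of Eq. (1.B.5).

    S         : sample covariance matrix (K x K)
    Sigma_hat : model-implied covariance matrix (K x K)
    """
    K = S.shape[0]  # K = p + q, the number of observed variables
    _, logdet_Sigma = np.linalg.slogdet(Sigma_hat)
    _, logdet_S = np.linalg.slogdet(S)
    return (logdet_Sigma
            + np.trace(S @ np.linalg.inv(Sigma_hat))
            - logdet_S
            - K)

S = np.array([[2.0, 0.5],
              [0.5, 1.0]])

print(f_ml(S, S))                 # perfect fit: discrepancy is (numerically) 0
misfit = np.array([[2.0, 0.0],    # same variances, covariance forced to 0
                   [0.0, 1.0]])
print(f_ml(S, misfit))            # positive: the model fails to reproduce S
```

In ML estimation a program such as Mplus searches for the parameter values θ whose implied Σ̂(θ) minimizes this function; here we only evaluate it at fixed matrices.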

2

Confirmatory factor analysis

2.1 Introduction

As discussed in Chapter 1, the key difference between path analysis and structural equation modeling (SEM) is that the former analyzes relationships among observed variables, while the latter focuses on relationships among latent variables (latent constructs or factors). In order to conduct SEM, latent variables/factors must be defined appropriately using a measurement model before they are incorporated into a structural equation model. Latent variables are unobservable and can only be indirectly estimated from observed indicators/items. Traditionally, the exploratory factor analysis (EFA) technique is applied to determine the underlying factorial structure of a measurement instrument (Comrey and Lee 1992; Gorsuch 1983; Mulaik 1972). EFA extracts unobserved factors from a set of observed indicator variables without specifying the number of factors or determining how the observed indicator variables load onto specific factors; instead, factors are defined after they are extracted. In other words, EFA is applied in situations where the factorial structure or the dimensionality of an instrument for a given population is unknown, usually when developing new instruments. In contrast, confirmatory factor analysis (CFA) (Bollen 1989; Brown 2015) is used in situations where one has some knowledge of the dimensionality of the instrument under study, based either on a theory or on empirical findings. The factors are theoretically defined, and which factors the specific indicators or measurement items load onto is hypothesized before testing the model. Researchers wish to determine and confirm that the factorial structures of the scales in the instrument under study are as hypothesized. In applications of CFA, researchers are interested mainly in evaluating the extent to which a set of indicators/items in a particular instrument actually measures the latent variables/factors they are designed to measure.
Some features of CFA are different from EFA. (i) All factors in EFA are either uncorrelated (orthogonal) or correlated (oblique). In CFA, relationships among


factors can be flexibly specified on a theoretical basis or based on empirical findings. (ii) Observed indicators/items in EFA load onto all the factors, while observed indicators/items in CFA load only onto the factors that they are hypothesized to measure; however, an indicator may also be allowed to load onto more than one factor in CFA on theoretical grounds. As a result, a CFA model is not only theoretically more meaningful, but also more parsimonious, because the factor loadings of indicators on the irrelevant factors are all fixed, a priori, at 0 in a CFA model, thus substantially reducing the number of parameters to estimate. (iii) Measurement errors are not allowed to be correlated in EFA, but this is not the case in CFA. Moreover, appropriate specifications of error correlations in CFA can be used to test method effects (Marsh 1996; Tomás and Oliver 1999; Wang et al. 2001). (iv) CFA can be conducted simultaneously in multiple groups so that measurement invariance across groups can be tested. Finally, (v) covariates can be readily included to predict the factors, thus expanding the CFA model to a structural equation model.1 In CFA, the link between the observed indicators/items and the factors is represented by factor loadings. The slope coefficient from regressing an observed indicator on a factor is the factor loading of the indicator on that factor, and the associated residual term is the corresponding measurement error in the indicator. As such, the measure of an observed indicator is separated into measurement error and the measure of the underlying factor. Consequently, when we model the relationships among the factors/latent variables, the estimated relationships are free of the effects of measurement errors. CFA is the fundamental part (i.e. the measurement model) of SEM. One of the prevalent uses of SEM techniques is to study construct validity or to assess the factorial structure of scales in the measuring instrument under study.
The first step of SEM is to ensure that the measurement models involved in the process are well-established and fit the data well. In real research, when a structural equation model does not fit the data, it is most often due to problems in the corresponding measurement models. In this chapter, we discuss some basics of CFA (Section 2.2) and demonstrate applications of CFA in Mplus using real research data (Section 2.3). Some important issues in CFA modeling, such as how to deal with violation of multivariate normality assumption, censored measures, and binary or ordered categorical measures will be addressed in demonstration of various CFA models (Sections 2.4 and 2.5). We will also expand our discussion to the item response theory (IRT) model, graded response model (GRM), second-order CFA model, and the bifactor model (Sections 2.6–2.8); as well as Bayesian confirmatory factor analysis (BCFA) model (Section 2.9). At the end of the chapter, we will introduce a new concept – plausible values of latent variable – and demonstrate how to generate and save plausible values for further analysis (Section 2.10).

1 Recently, exploratory SEM has been developed (Asparouhov and Muthén 2009), in which the measurement model is an EFA model.

2.2 Basics of CFA models

CFA is often used to determine and confirm the factorial structure of an already-developed measuring instrument in application among a target population. In other


words, CFA tests whether the theoretically defined or hypothesized factorial structures of the scales in an existing measuring instrument are valid. If the hypothesized CFA model fits the data, we confirm the factorial structure is valid for the population. This is called testing for factorial validity or construct validity of the measuring instrument (Byrne 2006). In this section, we discuss some basics of CFA models with an example of a well-known psychiatric measuring instrument, the Brief Symptoms Inventory 18 (BSI-18) (Derogatis 2000, 2001). The BSI-18 is a shorter version of the instrument Brief Symptoms Inventory 53 (Derogatis 1993; Derogatis and Spencer 1983). The BSI-53 is widely used to assess psychological disorders in clinical and non-clinical populations. It has good psychometric properties, including high internal consistency and test-retest reliability (Derogatis 1993; Derogatis and Spencer 1983). BSI-53 has nine well-defined psychometric subscales. However, the nine subscales are usually computed as composite scores, and their factorial structure has not been confirmed using factor analyses (Boulet and Boss 1991; Ruiperez et al. 2001). As a result, Derogatis (2000) developed a shorter version of the instrument – the BSI-18 – to be used as a screening tool for the most common psychiatric disorders: somatization (SOM), depression (DEP), and anxiety (ANX). The BSI-18 items were taken verbatim from the BSI 53-item instrument. The descriptions of the items are shown in Appendix 2.A. The originally designed factorial structure of the BSI-18 is shown in Table 2.1, in which the 18 measurement items or observed indicators load respectively to three factors (SOM, DEP, and ANX) with six items for each factor. The three theoretically defined subscales (SOM, DEP, and ANX) were confirmed by Derogatis (2000) using principal components analysis (PCA) (Tabachnick and Fidell 2001). 
Derogatis (2000, 2001) shows that the first two factors (SOM and DEP) remain as defined, but the third factor (ANX) may be split into two factors: Factor 3, underlying agitation (AGI) symptoms (3, Nervousness; 6, Tense; 15, Restlessness); and Factor 4, underlying panic (PAN) symptoms (9, Scared; 12, Panic episodes; 18, Fearful); yielding a four-factor solution (SOM, DEP, AGI, and PAN). Nonetheless, because the last two factors can both be considered ANX, a three-factor structure can be considered valid in the BSI-18 (Derogatis 2000, 2001). Factor analyses of the BSI-18 are often based on EFA, and findings are inconsistent. Many suggest that the BSI-18 may ultimately be measuring one underlying factor: the global severity index (GSI) of general psychological distress (Asner-Self et al. 2006; Boulet and Boss 1991; Coelho et al. 1998; Prelow et al. 2005).

Table 2.1  Subscales and corresponding items of BSI-18.

Somatization (SOM)       Item #    Depression (DEP)          Item #    Anxiety (ANX)           Item #
Faintness (x1)             1       Lonely (x5)                 5       Nervousness (x3)          3
Chest pains (x4)           4       No interest (x2)            2       Tense (x6)                6
Nausea (x7)                7       Blue (x8)                   8       Scared (x9)               9
Short of breath (x10)     10       Worthlessness (x11)        11       Panic episodes (x12)     12
Numb or tingling (x13)    13       Hopelessness (x14)         14       Restlessness (x15)       15
Body weakness (x16)       16       Suicidal thoughts (x17)    17       Fearful (x18)            18

By using
CFA, a more rigorous approach for assessment of factorial structure based on theory of the underlying latent variable structure, recent investigations of the BSI-18 have validated the three-dimensional structure (i.e. DEP, SOM, and ANX) as originally designed by Derogatis (2000), although an alternative four-factor model (i.e. DEP, SOM, AGI, and PAN) also fit the data (Recklitis et al. 2006; Durá et al. 2006). While both the three-factor and four-factor models fit the data well, the three-factor model is more parsimonious and easier to interpret than the four-factor model. In addition, as a screening measure, the BSI-18 is not intended to make a distinction between anxiety subtypes (Derogatis 2000). Therefore, the three-factor model is preferred (Recklitis et al. 2006; Durá et al. 2006). In this chapter we will demonstrate how to use CFA to assess the factorial structure of the BSI-18 using real data. The CFA models are presented in the diagrams

[Figure 2.1 (a) CFA of SOM. (b) CFA of DEP. (c) CFA of ANX. Each panel shows one factor (ξ1, ξ2, or ξ3) with its six indicators (x) and their measurement errors (δ).]
in Figure 2.1a–c,2 in which the unobserved latent variables or factors enclosed in circles (i.e. ξ1, ξ2, and ξ3) represent SOM, DEP, and ANX, respectively, while x1–x18 enclosed in boxes represent the 18 measurement items or observed indicators. In each of the CFA models, one factor has six indicators; therefore, the pieces of observed information (i.e. the number of variances/covariances among the observed indicators) are 6(6 + 1)/2 = 21, which is larger than the total of 12 free parameters (i.e. 5 free factor loadings, 1 factor variance, and 6 residual/error variances) to be estimated in the model.3 Thus, the model degrees of freedom df = 21 − 12 = 9, and each of the CFA models is over-identified.

[Figure 2.2 CFA of BSI-18: the three factors ξ1, ξ2, and ξ3 with their indicators x1–x18, measurement errors δ1–δ18, and factor covariances 𝜙21, 𝜙31, and 𝜙32.]

2 For simplicity, the models illustrated in the figures are based on the traditional analysis of covariance structure (COVS).
3 When the mean and covariance structures (MACS) are analyzed, the item means will be counted in the data points (i.e. 21 + 6 = 27), and the 6 item intercepts will be estimated as free parameters. Since in a single-group model factor means are all set to zero for the purpose of model identification, the degrees of freedom of the models remain unchanged (df = 9).

Figure 2.2 shows a three-factor CFA model in which the three factors are jointly modeled and the relationships between the factors are estimated. The symbols 𝜙21,
𝜙31, and 𝜙32 represent the covariances between factors. This is the model we are going to focus on in this chapter. If the model fits the data well, explanatory variables can be included in the model to predict the factors, and then the model will become a multiple indicators, multiple causes (MIMIC) model; if we replace the relationships (two-way arrows in the path diagram) between the factors with causal effects (one-way arrows in the path diagram), then the model will become a structural equation model. From this point of view, we can see that CFA is fundamental in SEM. In the following, we present the three-factor CFA solution in the matrix form of the basic SEM equations, x = Λξ + δ:
\[
\underset{X}{\begin{bmatrix} x_1\\ x_4\\ x_7\\ x_{10}\\ x_{13}\\ x_{16}\\ x_5\\ x_2\\ x_8\\ x_{11}\\ x_{14}\\ x_{17}\\ x_3\\ x_6\\ x_9\\ x_{12}\\ x_{15}\\ x_{18} \end{bmatrix}}
=
\underset{\Lambda}{\begin{bmatrix}
1 & 0 & 0\\
\lambda_{x41} & 0 & 0\\
\lambda_{x71} & 0 & 0\\
\lambda_{x101} & 0 & 0\\
\lambda_{x131} & 0 & 0\\
\lambda_{x161} & 0 & 0\\
0 & 1 & 0\\
0 & \lambda_{x22} & 0\\
0 & \lambda_{x82} & 0\\
0 & \lambda_{x112} & 0\\
0 & \lambda_{x142} & 0\\
0 & \lambda_{x172} & 0\\
0 & 0 & 1\\
0 & 0 & \lambda_{x63}\\
0 & 0 & \lambda_{x93}\\
0 & 0 & \lambda_{x123}\\
0 & 0 & \lambda_{x153}\\
0 & 0 & \lambda_{x183}
\end{bmatrix}}
\underset{\xi}{\begin{bmatrix} \xi_1\\ \xi_2\\ \xi_3 \end{bmatrix}}
+
\underset{\delta}{\begin{bmatrix} \delta_1\\ \delta_4\\ \delta_7\\ \delta_{10}\\ \delta_{13}\\ \delta_{16}\\ \delta_5\\ \delta_2\\ \delta_8\\ \delta_{11}\\ \delta_{14}\\ \delta_{17}\\ \delta_3\\ \delta_6\\ \delta_9\\ \delta_{12}\\ \delta_{15}\\ \delta_{18} \end{bmatrix}}
\tag{2.1}
\]

which is equivalent to
\[
\begin{aligned}
x_1 &= \xi_1 + \delta_1 & x_5 &= \xi_2 + \delta_5 & x_3 &= \xi_3 + \delta_3 \\
x_4 &= \lambda_{x41}\xi_1 + \delta_4 & x_2 &= \lambda_{x22}\xi_2 + \delta_2 & x_6 &= \lambda_{x63}\xi_3 + \delta_6 \\
&\;\;\vdots & &\;\;\vdots & &\;\;\vdots \\
x_{16} &= \lambda_{x161}\xi_1 + \delta_{16} & x_{17} &= \lambda_{x172}\xi_2 + \delta_{17} & x_{18} &= \lambda_{x183}\xi_3 + \delta_{18}
\end{aligned}
\tag{2.2}
\]

where each observed indicator is represented as a linear function of one particular latent variable/factor and a random error. Different from regular regression, the ξ variables on the right side of the equations are not observed but latent. The subscript of a factor loading refers to the item number and its corresponding factor number. For example, λx183 is the factor loading of indicator x18 on factor ξ3, measuring the magnitude of the expected change in the BSI-18 item Fearful corresponding to a one-unit change in the factor Anxiety (ANX).
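By Eq. (1.A.8), the model of Eqs. (2.1)–(2.2) implies that the covariance matrix of the 18 indicators is ΛΦΛ′ + Θδ. The numpy sketch below builds the loading pattern of Eq. (2.1) with arbitrary illustrative loading values (not estimates from any data) and checks two structural implications: items on the same factor covary through that factor's variance, and items on different factors covary only through the factor covariances.

```python
import numpy as np

# 18 x 3 loading pattern of Eq. (2.1); rows follow the item order
# x1, x4, x7, x10, x13, x16, x5, x2, x8, x11, x14, x17, x3, x6, x9, x12, x15, x18.
# All free-loading values below are arbitrary illustrative numbers.
Lam = np.zeros((18, 3))
Lam[0, 0] = 1.0                              # x1 scales factor 1 (SOM)
Lam[1:6, 0] = [0.9, 0.8, 0.7, 0.6, 0.8]      # x4, x7, x10, x13, x16
Lam[6, 1] = 1.0                              # x5 scales factor 2 (DEP)
Lam[7:12, 1] = [0.9, 0.7, 0.8, 0.6, 0.9]     # x2, x8, x11, x14, x17
Lam[12, 2] = 1.0                             # x3 scales factor 3 (ANX)
Lam[13:18, 2] = [0.8, 0.7, 0.9, 0.6, 0.8]    # x6, x9, x12, x15, x18

Phi = np.array([[1.0, 0.5, 0.4],             # factor variances/covariances
                [0.5, 1.0, 0.6],
                [0.4, 0.6, 1.0]])
Theta = np.diag(np.full(18, 0.5))            # uncorrelated measurement errors

Sigma = Lam @ Phi @ Lam.T + Theta            # implied covariances, cf. Eq. (1.A.8)

# Two items on the same factor covary through that factor's variance:
assert np.isclose(Sigma[0, 1], 0.9 * 1.0)    # cov(x1, x4) = lambda_x41 * phi_11
# Items on different factors covary only through the factor covariance:
assert np.isclose(Sigma[0, 6], 0.5)          # cov(x1, x5) = phi_21
print("implied covariance matrix:", Sigma.shape)
```

This is the same counting exercise as in the text: the pattern matrix fixes all cross-loadings to 0, which is what makes the model parsimonious and testable.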

2.2.1 Latent variables/factors

The concept of latent variable was initially introduced by Lazarsfeld (1950) in his studies of latent structure analysis. Latent variables are unobserved or unmeasured variables referring to theoretical or hypothetical concepts, such as the factors in the dataset we used for CFA demonstration: somatization (SOM, denoted as ξ1), depression (DEP, denoted as ξ2), and anxiety (ANX, denoted as ξ3). The latent variables are hypothesized to explain the covariances among the observed indicator variables. In SEM, interest focuses on latent constructs, rather than on the observed indicator variables used to measure the constructs. Because the latent variables/factors are unobserved but inferred from their observed indicator variables, they have no metric or unit of measurement. To solve this problem, the most often-used method is to fix one factor loading to 1.0. In our example, one factor loading (i.e. λx11, λx52, or λx33) is set to 1.0 for each factor (see Eq. (2.1)). Most of the existing SEM computer programs, including Mplus, fix the factor loading of the first indicator of a factor to 1.0 by default. This sets the scale of the factor to correspond to the scale of that observed indicator. Alternative scaling methods, such as the latent standardization method and the effect-coding method, will be discussed in Section 2.3.

2.2.2 Indicator variables

4 In SEM, there are two kinds of indicators of latent variables: causal (formative) indicators and effect (reflective) indicators. The former are observed variables that directly affect their latent variables, while the latter are observed variables that are assumed to be affected by their underlying latent variable(s). Nearly all measurements in real research implicitly assume effect indicators, and only effect indicators are considered in this book. For more information about formative and reflective indicators, readers are referred to Blalock (1964), Bollen (1984), Bollen and Lennox (1991), and Edwards and Bagozzi (2000).

As latent variables cannot be directly measured, we must define the latent variables of interest in terms of observable indicator variables, and assess the latent variables indirectly through measurements of the indicator variables.4 Indicator variables are also referred to as observed variables, measured variables, manifest variables, items, or proxies. Any single indicator is an imperfect measure of the underlying latent variable: that is, it contains measurement error. Multiple indicators are therefore used to measure a latent variable in a measurement model (e.g. a CFA model). In our example model, the indicators are denoted x1–x18, and each factor has six indicators (see Figure 2.1a–c). How many indicators are needed per factor remains an open question, and recommendations are sometimes contradictory. Some researchers favor more indicators per factor; according to Marsh et al. (1998), "more is better." Their studies show that a large number of indicators per factor compensates to some extent for a small sample size, and vice versa. In addition, MacCallum et al. (1996) show that more indicators per factor


provides a more precise estimate (e.g. the narrowness of confidence intervals [CIs] about fit indices) than a comparable model with fewer indicators per factor. On the other hand, some studies show that the number of indicators per factor has a negative effect on some model fit indices. For example, Anderson and Gerbing (1984) show that a larger number of indicators per factor leads to worse model fit, as indicated by goodness-of-fit (GFI), adjusted goodness-of-fit (AGFI), and root mean square (RMS); and Ding et al. (1995) found that normed fit index (NFI), non-normed fit index (NNFI), relative noncentrality index (RNI), and comparative fit index (CFI) were negatively affected by increasing the indicator per factor ratio. In regard to model identification, at least three indicators per factor are needed in a one-factor CFA model. In order for such a model to be identified, each indicator loads on only one factor, and the measurement error terms are not correlated with each other. For a multifactor CFA model, in which each indicator loads on only one factor, the measurement error terms are not correlated, and all factors are allowed to correlate with each other; then the model can be identified with two indicators per factor. However, a minimum of three indicators per factor is usually required even in a multifactor CFA model (Velicer and Fava 1998), and it has been recommended to have four indicators per factor (e.g. Costner and Schoenberg 1973; Mulaik 1983). In applied research, particularly in psychiatric studies, a measuring instrument often consists of a large number of items, resulting in many indicators per latent construct or per factor. Usually, composite measures or total scores are generated with an acceptable reliability from multiple indicators. For example, a Cronbach’s alpha > 0.70 is a widely used rule of thumb in social studies (Nunnally and Bernstein 1994). CFA models using the same set of indicators are often found not to fit the data well. 
For instance, the composite measure of the BSI-53 has good Cronbach’s alpha values and dimensionalities of the constructs on the one hand; yet the factorial structure of the scale is often not validated by factor analysis on the other hand. The reason the indicators are not well-behaved in CFA modeling is complicated. One possible solution is to consider reducing the number of indicators per factor to form a parsimonious CFA model. As Hayduk (1996) points out, “it is tough to get even two indicators of most concepts to cooperate, and rare to find three well-behaved indicators of a concept” (p. 30). When many indicators are included in a CFA model, Hayduk (1996, p. 25) suggests “to begin by narrowing down the number of potential indicators by focusing on the best two, or three, indicators.” It is important to keep in mind that this recommendation is for multifactor CFA models.

2.2.3 Item parceling

In real studies, the number of indicators designed to measure a theoretical construct is often large, which frequently results in difficulty in modeling. The more indicators per factor, the more parameters need to be estimated; thus a larger sample size is required. An often-encountered problem with a large number of indicators in a CFA model is that the model does not fit the data unless many item residuals/errors


are specified to be correlated (i.e. some item residual covariances5 are set as free parameters). The presence of correlated errors implies that the covariance among the observed indicators is accounted for not only by the underlying common factors and random error, but also by some unknown shared causes of the observed indicators. In addition, the observed indicators often do not have normal distributions, thus violating the assumption on which normal-theory maximum likelihood (ML) and generalized least squares estimation are based. Item parceling is a common practice in CFA and SEM for dealing with a large number of items in a scale, and it can also help address the non-normality problem in the data (Bandalos 2002; Bandalos and Finney 2001; Marsh 1994; Nasser and Wisenbaker 2006; Thompson and Melancon 1996). Parceling or bundling items refers to summing or averaging the original item scores from two or more items and using these parcel scores in place of the original individual item scores as new indicators of the underlying latent variables/factors in CFA modeling. Each parcel is likely to be more strongly related to the latent variable and is less likely to be influenced by the idiosyncratic wording and method effects associated with the individual items (Marsh and Hau 1999). Studies have shown that, if the items parceled are unidimensional, parceled items are more likely to conform to the multivariate normality assumptions than the original individual items. Consequently, we will end up with a more parsimonious model with a more optimal variable-to-sample-size ratio, along with more stable parameter estimates, particularly with smaller samples. Such models will also have better-fitting solutions, as measured by the root mean square error of approximation (RMSEA), CFI, and the χ2 test (Bagozzi and Edwards 1998; Bandalos 2002; Hau and Marsh 2004; Thompson and Melancon 1996).
Parcels are typically formed a posteriori and ad hoc, and can be created in different ways. For example, parcels can be created based on the following approaches: content similarity (Nasser et al. 1997), internal consistency (Kishton and Widaman 1994), factor loadings in a preliminary EFA (Kishton and Widaman 1994), factor loadings and overall model fit indexes in CFA (Kishton and Widaman 1994; Nasser et al. 1997), descriptive statistics of items (e.g. skewness/kurtosis) (Thompson and Melancon 1996; Landis et al. 2000; Nasser and Wisenbaker 2003), and random combinations of items (e.g. split halves or odd-even combinations) (Prats 1990). Placing more similar items together in the same parcel is called isolated parceling, while distributing similar items equally across parcels is called distributed parceling. In addition, two sequential items may be averaged to form a parcel; this is called item-pairs parceling (Hau and Marsh 2004; Marsh and O'Neil 1984). Item parceling is useful and often recommended when the sample size is small, error terms are correlated, or the multivariate normality assumption is violated. However, a recent study has shown that item parceling may lead to parameter estimate bias (Bandalos 2008).
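The item-pairs approach described above is simple to sketch. In the example below the "item scores" are random toy data, and averaging consecutive pairs is just one of the parceling rules mentioned; nothing here reflects real BSI-18 data.

```python
import random

random.seed(1)

# Toy item scores: 6 items measuring one construct, 5 respondents
# (random 0-4 Likert-style values, for illustration only).
items = [[random.randint(0, 4) for _ in range(6)] for _ in range(5)]

def item_pair_parcels(rows):
    """Average each pair of consecutive items (item-pairs parceling)."""
    parcels = []
    for row in rows:
        parcels.append([(row[i] + row[i + 1]) / 2
                        for i in range(0, len(row), 2)])
    return parcels

parcels = item_pair_parcels(items)
print(len(parcels[0]))   # 6 items -> 3 parcel indicators per respondent
```

The parcel scores would then replace the original items as indicators in the CFA model, halving the number of loadings and error variances to estimate for this construct.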

5 Item residual covariance and item error covariance are used interchangeably in the book.


2.2.4 Factor loadings

The coefficients that link indicators to their underlying latent variables/factors (e.g. λx41, …, λx183 in our example CFA model) are called, in the terminology of factor analysis, factor loadings, representing the relationship of the indicators with the underlying latent variables.6 Different from EFA, many factor loadings in CFA are restricted to 0. A factor loading fixed to 0 indicates that the indicator does not load onto, and is not influenced by, that factor. In a standard CFA model, each indicator is specified to load on only one factor, and its measurement error is not correlated with other measurement errors. A CFA model in which indicators have cross-factor loadings is a non-standard CFA model. Cross-loading items are an undesirable feature of a measurement instrument because they lead to a complex factor structure that is difficult, if not impossible, to cross-validate. The factor loading of an indicator on its underlying factor is contingent rather than fixed: the value of the factor loading of a specific indicator may change slightly if additional indicators are added to the model. Usually, factor loadings from the completely standardized solution (i.e. both observed indicators and factors are standardized) are reported in real studies.7 Conventionally, a level of 0.30 is considered the cut-off point for the magnitude of standardized factor loadings (Kim and Mueller 1978; Brown 2015). Some consider 0.32 an adequate factor-loading cut-off, indicating that an item has at least 10% of its variance (i.e. 0.32² ≈ 0.10) explained by its underlying factor (Tabachnick and Fidell 2001). More conservatively, a value of 0.40 is suggested as the cut-off (Ford et al. 1986). Importantly, in order to be considered an acceptable indicator, the factor loading of the indicator must also be statistically significant (e.g. |t| ≥ 1.96).
The standardized factor loadings, including the one whose unstandardized value is fixed to 1.0 for the purpose of identification, are usually less than 1.0. However, standardized factor loadings might have a value greater than 1.0 in magnitude. This does not necessarily mean something is wrong. Factor loadings are correlations only if a correlation matrix is analyzed and the factors are standardized and uncorrelated (orthogonal). When factors are correlated (oblique), the factor loadings are regression coefficients, instead of correlations, and as such they could be larger than 1.0 in magnitude. However, standardized factor loadings larger than 1.0 might indicate a high degree of multicollinearity in the data (Jöreskog 1999).

2.2.5 Measurement errors

It is common sense that observed indicator variables in social sciences can never be perfectly measured. No matter how refined the measuring instrument is, and no

6 For observed numeric indicators, the factor loadings are simple linear regression coefficients. If the observed indicators are categorical, then the factor loadings could be probit or logistic regression coefficients, depending on what estimator is used. We will discuss this issue later.
7 No matter which indicator's unstandardized factor loading is set to 1.0, the standardized solution will always be the same because the completely standardized solution rescales the variances of all latent variables and indicators to be 1.0.

CONFIRMATORY FACTOR ANALYSIS


matter how careful the procedure of applying it is, the observed indicator variables usually contain sizable measurement errors.8 Even for variables that can be directly measured, measurement errors are always a concern. Sources of measurement error in surveys include the questionnaire (e.g. inappropriately designed questions or wordings), the data-collection method (e.g. face-to-face, audio computer-assisted self-interview [ACASI], telephone, or online interviews), the interviewers, and the respondents. In a strict sense, we can never measure exactly the true values expected in theory. Without appropriately handling the measurement errors in observed variables, model results could lead to misleading conclusions and thus wrong policy implications. In our BSI-18 example CFA model, the measurement errors of the exogenous indicators x1–x18 are denoted by 𝛿1–𝛿18. Measurement errors of endogenous indicators that measure endogenous latent variables 𝜂 are denoted by 𝜀 (see Table 1.1). A measurement error reflects sources of variance in an observed indicator not explained by the corresponding underlying latent variable/factor. In factor analysis, it is assumed that the indicator variables designed to measure an underlying latent variable/factor share something in common; thus they should be at least moderately correlated with each other. That is, the covariances among the indicators are due to the underlying latent variables, because the indicators are all influenced by the same latent variable. Once the indicators load onto their underlying latent variable/factor, they are not supposed to be correlated with each other anymore (see Eq. (2.2)). This is called the local independence assumption. In other words, in a standard CFA model, all item residual/error covariances are fixed to 0 because measurement errors are not correlated with each other. The assumption of local independence can be evaluated by examining the covariances/correlations of item residuals/errors.
Residual/error correlations are indicative of response dependence or multidimensionality (i.e. the corresponding items may also measure something else in common that is not represented in the model, in addition to the factors that the indicators are designed to measure) (Tennant and Conaghan 2007). Some researchers are not concerned about an error correlation as long as the correlation r < 0.70 (indicating 49% shared variance;9 Linacre 2009). However, an absolute value of r = 0.30 is often considered the cut-off point for error correlation (Miller et al. 2010; Smith 2002). Very often, residual/error correlations are due to measurement artifacts (e.g. similar wordings in items, positively vs. negatively worded items, reading difficulty, items referring to similar contexts, etc.). It is also possible that some items designed to measure a theoretical construct may also measure an unknown construct that is unexpected by the investigator. The likelihood of having correlated item errors increases

8 There are two kinds of measurement errors: random errors and systematic errors. Random errors are unpredictable fluctuations in measurement, assumed to be randomly scattered about the true value with a normal distribution. Systematic errors are biases in measurement that are either constant or proportional to the true value of the measure. Only random errors are considered in this book.
9 Shared variance, or overlap of variation of two variables, is the variance in one variable accounted for by another variable. The percentage of shared variance is represented by the square of the correlation coefficient r between the two variables. With r = 0.70, the percentage of shared variance is r² = 0.49.


when more items are included in an instrument. As mentioned earlier, in applied research, a CFA model with many indicators often does not fit the data until some item errors are specified to be correlated, based on the model fit modification indices provided by computer programs. It is important to keep in mind that setting residual/error covariances as free parameters in a CFA must be substantively meaningful. Appropriate specification of residual/error covariances is sometimes useful for testing method effects: for example, when both positive and negative wordings are used in the items of a scale (Marsh 1996; Wang et al. 2001). In addition, when the same construct is measured in a longitudinal study, measurement errors are likely to be correlated given the nature of the data. The ability to specify such residual/error covariances is one of the advantages of CFA. However, it is not recommended to correlate item error terms in applications of CFA simply for the purpose of model fit improvement.

2.2.6 Item reliability

The classical definition of measurement reliability is the extent to which the variance of an observed variable is explained by the true score that the variable is supposed to measure (Lord and Novick 1968). Let's define an observed variable x = 𝜆x𝜉 + 𝛿; then Var(x) = (𝜆x)²𝜙 + 𝜃𝛿, where 𝜙 is the variance of the latent variable 𝜉 and 𝜃𝛿 is the variance of the measurement error 𝛿. The percent explained variance in x is (𝜆x)²𝜙/Var(x), which is reported as the squared multiple correlation in the output of SEM computer programs. When the observed variable loads on only one factor, this value can be interpreted as the reliability of the observed variable as an indicator of the underlying latent variable/factor. This is called the structural equation definition of item reliability (Bollen 1989). In the completely standardized solution, 𝜙 = 1 and Var(x) = 1, and thus (𝜆x)²𝜙/Var(x) = (𝜆x)², which is the squared standardized factor loading of x on 𝜉. The squared factor loading is also called the communality of the indicator. However, the term communality is more often used in situations where an indicator cross-loads onto multiple factors. In such a case, the communality for an indicator is computed as the sum of the squared factor loadings for that indicator. This is equivalent to the R-square or squared multiple correlation, measuring the percent of variance of the observed variable explained by all the underlying latent variables/factors that the indicator loads on. In social studies, measurement reliabilities for single indicators are often estimated based on test-retest measures of the same items, under the assumption of parallel or tau-equivalent measures (see Appendix 2.B). One of the advantages of CFA is being able to estimate the reliabilities of observed variables using cross-sectional data. More importantly, it provides a general formula for item reliability estimates, where indicator variables can be parallel, tau-equivalent, or congeneric measures.
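The structural equation definition of item reliability can be sketched numerically. The loading and variance values below are hypothetical, not estimates from the BSI-18 example:

```python
# Illustrative sketch (made-up values): item reliability under the
# structural equation definition, (lambda^2 * phi) / Var(x),
# where Var(x) = lambda^2 * phi + theta.

def item_reliability(lam, phi, theta):
    """Share of an indicator's variance explained by its factor."""
    var_x = lam**2 * phi + theta
    return lam**2 * phi / var_x

# In the completely standardized solution (phi = 1, Var(x) = 1), this is
# simply the squared standardized loading -- the item's communality:
print(round(item_reliability(lam=0.80, phi=1.0, theta=0.36), 2))  # 0.64

# For a cross-loading indicator, the communality is the sum of squared
# standardized loadings across all factors the item loads on:
print(round(sum(l**2 for l in [0.60, 0.40]), 2))  # 0.52
```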

2.2.7 Scale reliability

Scale reliability or construct reliability refers to the reliability of a construct or latent variable. When a scale is measured by multiple items, a popular measure that has been widely used in social sciences to measure scale reliability is Cronbach’s


alpha coefficient (Cronbach 1951; see Appendix 2.C). Cronbach's alpha is simple to calculate, but it does not provide a dependable estimate of scale reliability. If indicators are not tau-equivalent or parallel measures, which is often the case in applied research, Cronbach's alpha underestimates the scale reliability when measurement errors of the corresponding indicators are uncorrelated. With correlated measurement errors, Cronbach's alpha would either underestimate or overestimate the scale reliability, depending on measurement parameters (Raykov 2001). To overcome the disadvantages of Cronbach's alpha, scale reliability can be estimated based on the results of CFA (Jöreskog 1971a; Dillon and Goldstein 1984; Bollen 1989). When measurement errors are not correlated, the CFA-based scale reliability can be calculated as:

\rho = \frac{\left(\sum_i \lambda_i\right)^2 \mathrm{Var}(\xi)}{\left(\sum_i \lambda_i\right)^2 \mathrm{Var}(\xi) + \sum_i \theta_i} \qquad (2.3)

where 𝜆i is the unstandardized factor loading of the ith indicator, and 𝜃i is the unstandardized residual variance of the ith indicator, estimated from the CFA model. When the results of the standardized solution are used, Eq. (2.3) becomes

\rho = \frac{\left(\sum_i \lambda_i\right)^2}{\left(\sum_i \lambda_i\right)^2 + \sum_i \theta_i} \qquad (2.4)

When measurement errors are correlated, Eq. (2.4) is modified as (Raykov 2004)

\rho = \frac{\left(\sum_i \lambda_i\right)^2}{\left(\sum_i \lambda_i\right)^2 + \sum_i \theta_i + 2\sum_i \sum_j \theta_{ij}} \qquad (2.5)

where the new term 2∑i∑j 𝜃ij, which is two times the sum of the residual covariances, is included in the denominator. Equations (2.3)–(2.5) show how to calculate point estimates of scale reliability using CFA model results. Although the CI of scale reliability is not commonly reported in the literature, the CI can be estimated using CFA modeling results (Raykov 2002, 2004). We will show later in our example that both the scale reliability and its CI can be readily estimated in Mplus.
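The composite reliability formulas can be sketched in a few lines; the loadings and residual variances below are made up for illustration, not taken from the example model:

```python
# Hedged sketch of CFA-based composite (scale) reliability, Eqs. (2.3)-(2.5);
# loadings and residual variances below are illustrative only.

def scale_reliability(loadings, resid_vars, factor_var=1.0, resid_covs=()):
    """Composite reliability; resid_covs lists any residual covariances."""
    num = sum(loadings) ** 2 * factor_var
    den = num + sum(resid_vars) + 2 * sum(resid_covs)
    return num / den

lam = [0.7, 0.8, 0.6]        # standardized loadings (Eq. 2.4 form)
theta = [0.51, 0.36, 0.64]   # residual variances, 1 - lambda^2 here

print(round(scale_reliability(lam, theta), 3))                     # 0.745
# With one residual covariance of 0.10 between two items (Eq. 2.5),
# the denominator grows and the reliability estimate drops:
print(round(scale_reliability(lam, theta, resid_covs=[0.10]), 3))  # 0.721
```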

2.3 CFA models with continuous indicators

Having introduced the basic concepts of CFA models, let's turn our attention to application of CFA with continuous measures using Mplus. In this section, we demonstrate


how to run the example CFA model proposed in Section 2.2 using real research data from a natural history study of rural illicit drug users in Ohio, USA. Such a population is important for testing BSI-18, given the high rates of psychological distress both as a consequence of drug use and as a pre-existing condition for which they are self-medicating (Grant et al. 2004). A total sample of 248 drug users was recruited from three rural counties in Ohio; respondent-driven sampling (RDS) was used for sample recruitment (Heckathorn 1997, 2002; Wang et al. 2007). The detailed description of recruitment approaches and sample characteristics can be found in previous publications (Siegal et al. 2006). The following Mplus program estimates a three-factor (i.e. SOM, DEP, and ANX) CFA model, in which all the BSI-18 items are treated as continuous variables.

Mplus Program 2.1

TITLE: 3-factor CFA with continuous indicator variables.
DATA: FILE = BSI_18.dat;
VARIABLE: NAMES = X1-X18 Gender Ethnic Age Edu Crack ID;
    MISSING = ALL (-9);
    USEVARIABLES = X1-X18;
ANALYSIS: !ESTIMATOR = ML; !default;
    !TYPE=GENERAL MISSING; !default;
MODEL: SOM BY X1 X4 X7 X10 X13 X16; !Somatization;
    DEP BY X5 X2 X8 X11 X14 X17; !Depression;
    ANX BY X3 X6 X9 X12 X15 X18; !Anxiety;
OUTPUT: SAMPSTAT TECH1 STDY FSDETERMINACY MOD;

The TITLE command provides a label for the program. Although it is optional, it is always a good idea to include some notes in the TITLE command. The DATA command tells the program where to read the data. The data must be in ASCII (American Standard Code for Information Interchange) or text format. The FILE statement in the DATA command specifies the data file name. In our example, both the data file (BSI_18.dat) and the program file (the Mplus .inp file) are stored in the same folder on our computer, and thus path specification is not necessary here. In the VARIABLE command, the NAMES statement specifies all the variable names included in the data. Note that the order of the variables specified in the program must match the order in which they appear in the data. Only those variables that are used in the model are specified in the USEVARIABLES statement; the order of variables does not matter in this line. In this example, variables x1–x18 in the dataset are used for modeling. The MISSING statement specifies any user-defined missing values in the data. In our example, missing values in the data are coded as −9 and are specified using MISSING = ALL (-9) in the VARIABLE command. Missing values


denoted by symbols such as "⋅" and "*" can be specified using MISSING = ⋅ and MISSING = * in the VARIABLE command. The ANALYSIS command specifies what type of analysis will be implemented. For example, the default is TYPE = GENERAL, which covers the analyses included in the Mplus Base Program, such as regression, path analysis, CFA, SEM, growth modeling, survival analysis, etc. TYPE = MIXTURE will be applied for mixture modeling in Chapter 6. For other types of analysis, such as TWOLEVEL, THREELEVEL, COMPLEX, CROSSCLASSIFIED, etc., readers are referred to the Mplus User's Guide. Various estimators can be specified in the ANALYSIS command for model estimation. For continuous outcomes, the default is the ML estimator. TYPE = MISSING is the default option for handling missing data in all analyses, assuming ignorable missingness, i.e. missing completely at random (MCAR) or missing at random (MAR). Full information maximum likelihood (FIML) (Arbuckle 1996; Little and Rubin 2002) is used in conjunction with the ML estimator for model estimation. FIML uses every piece of information available in the data for modeling and is superior to other approaches, such as listwise deletion, pairwise deletion, and similar response pattern imputation (Enders and Bandalos 2001). With FIML, MAR, instead of MCAR, can be assumed; MAR allows missingness to depend on both observed outcome measures and covariates (Arbuckle 1996; Little and Rubin 2002). Note that, like many other SEM programs, early versions of Mplus estimated SEM based on analysis of COVS by default. Starting with version 5, by default Mplus estimates SEM based on analysis of MACS.10 Analysis of MACS is necessary when a model includes a mean structure, such as in multigroup models and latent growth models. For a single-group model, the results based on analysis of MACS are the same as those based on analysis of COVS, except that an intercept parameter is estimated for each indicator variable.
In Mplus, the model is specified in the MODEL command. The 18 indicators load on the three factors (SOM, DEP, and ANX) via the BY statements. The factor loading of the first indicator of each factor is fixed to 1.0 by default for the purpose of model identification. In the OUTPUT command, the SAMPSTAT statement prints sample statistics in the output file; TECH1 reports the parameter specification; STDY requests a standardized solution, in which the variances of the continuous latent variables and the variances of the outcome variables (e.g. indicators/items of latent variables) are used for standardization;11 and the MOD option prints modification indices. The FSDETERMINACY option requests a factor score determinacy value for each factor, which is the correlation between the estimated factor scores and the unobservable true factor

10 Mplus still allows model estimation based on analysis of COVS by including the statements MODEL = NOMEANSTRUCTURE and INFORMATION = EXPECTED in the ANALYSIS command. However, these statements must be used in conjunction with LISTWISE = ON in the DATA command to handle missing values in analysis of COVS.
11 Different standardizations are available in Mplus. With the option STDYX in the OUTPUT command, all outcome variables, continuous latent variables, and covariates will be standardized; with the option STDY, all outcome variables and continuous latent variables will be standardized; with the option STD, only continuous latent variables will be standardized. If the option STANDARDIZED is used, Mplus will produce all three standardization solutions.


scores ranging from 0 to 1. It is also considered a measure of internal consistency. A factor score determinacy value ≥0.80 suggests strong correlations of items with their respective factor, denoting high internal consistency (Gorsuch 1983). The estimation of the model terminated normally. Selected model results are shown in Table 2.2. The MODEL FIT INFORMATION section of the Mplus output provides information about the overall model fit. The 𝜒² statistic for the target model, or the model being tested, is 𝜒² = 301.051, df = 132 (p = 0.000), which rejects the null hypothesis of a good fit. As discussed in Chapter 1, the model 𝜒² statistic is highly sensitive to sample size, and the significance of the 𝜒² test should not be a reason by itself to reject a model. Note in the Mplus output that the 𝜒² Test of Model Fit for the Baseline Model (𝜒² = 2243.924, df = 153, p < 0.001) is much larger than the 𝜒² Test of Model Fit (𝜒² = 301.051, df = 132, p < 0.001). The baseline CFA model is defined as a model in which all factor loadings are set to 1, all variances/covariances of the latent variables/factors are set to 0, and only the intercepts and residual variances of the indicators (observed outcome variables in CFA) are estimated. For example, suppose we specify our example CFA as the following:

…
MODEL: SOM BY X1@1 X4@1 X7@1 X10@1 X13@1 X16@1;
    DEP BY X5@1 X2@1 X8@1 X11@1 X14@1 X17@1;
    ANX BY X3@1 X6@1 X9@1 X12@1 X15@1 X18@1;
    SOM@0; DEP@0; ANX@0;
    SOM with DEP@0 ANX@0;
    DEP with ANX@0;

Then the estimated 𝜒² statistic of the target model will be identical to that of the baseline model (𝜒² = 2243.924, df = 153, p < 0.001). In regard to other model fit indices, Table 2.2 shows that both CFI = 0.919 and TLI = 0.906 are ≥0.90, indicating an acceptable fit. The estimated value of RMSEA (0.072) is within the range of fair fit (0.05–0.08). However, the upper limit of its 90% CI (0.061, 0.083) is out of bounds (i.e. >0.08), and the close-fit test (p = 0.001) rejects the hypothesis of close fit (RMSEA ≤0.05). Yet the standardized root mean square residual (SRMR = 0.049) is less than 0.08, indicating a good fit. As both CFI and TLI are greater than 0.90, and both RMSEA and SRMR are less than 0.08, overall, the model fit is acceptable, but not quite satisfactory because the close-fit test p-value is less than 0.05. The MODEL RESULTS section of the Mplus output shows the estimated factor loadings, variances, and covariances of the latent variables, as well as residual variances, along with their standard errors, t-ratios, and p-values. As the model estimation is based on analysis of MACS, an intercept parameter is also estimated for each of the items (x1–x18). Factor means are all set to zero in a single-group model by default.
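The fit indices discussed above can be recomputed directly from the target and baseline 𝜒² statistics. A brief sketch using the values reported in the output (note that conventions for the RMSEA sample-size term, N vs. N − 1, differ across programs; both round to 0.072 here):

```python
from math import sqrt

# Recomputing CFI, TLI, and RMSEA for the three-factor CFA from the
# reported chi-squares (target: 301.051, df 132; baseline: 2243.924,
# df 153; N = 248).
chi_t, df_t = 301.051, 132
chi_b, df_b = 2243.924, 153
n = 248

cfi = 1 - max(chi_t - df_t, 0) / max(chi_b - df_b, chi_t - df_t, 1e-12)
tli = ((chi_b / df_b) - (chi_t / df_t)) / ((chi_b / df_b) - 1)
rmsea = sqrt(max(chi_t - df_t, 0) / (df_t * (n - 1)))

print(round(cfi, 3), round(tli, 3), round(rmsea, 3))  # 0.919 0.906 0.072
```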

Table 2.2 Selected Mplus output: three-factor CFA.

MODEL FIT INFORMATION
Number of Free Parameters                      57
Loglikelihood
    H0 Value                            -5973.598
    H1 Value                            -5823.073
Information Criteria
    Akaike (AIC)                        12061.196
    Bayesian (BIC)                      12261.462
    Sample-Size Adjusted BIC            12080.770
      (n* = (n + 2) / 24)
Chi-Square Test of Model Fit
    Value                                 301.051
    Degrees of Freedom                        132
    P-Value                                0.0000
RMSEA (Root Mean Square Error Of Approximation)
    Estimate                                0.072
    90 Percent C.I.                         0.061  0.083
    Probability RMSEA <= .05                0.001

…

U4=0; IF X4>1 THEN U4=1;
U7=0; IF X7>1 THEN U7=1;
U10=0; IF X10>1 THEN U10=1;
U13=0; IF X13>1 THEN U13=1;
U16=0; IF X16>1 THEN U16=1;
U5=0; IF X5>1 THEN U5=1;
U2=0; IF X2>1 THEN U2=1;
U8=0; IF X8>1 THEN U8=1;
U11=0; IF X11>1 THEN U11=1;
U14=0; IF X14>1 THEN U14=1;
U17=0; IF X17>1 THEN U17=1;
U3=0; IF X3>1 THEN U3=1;
U6=0; IF X6>1 THEN U6=1;
U9=0; IF X9>1 THEN U9=1;
U12=0; IF X12>1 THEN U12=1;
U15=0; IF X15>1 THEN U15=1;
U18=0; IF X18>1 THEN U18=1;
!ANALYSIS: ESTIMATOR = WLSMV; !Default;
    !Parameterization = Delta; !Default;
MODEL: SOM BY U1 U4 U7 U10 U13 U16; !Somatization;
    DEP BY U5 U2 U8 U11 U14 U17; !Depression;
    ANX BY U3 U6 U9 U12 U15 U18; !Anxiety;
OUTPUT: STD;
SAVEDATA: DIFFTEST=TEST.DAT; !Save info for Chi-square difference test;
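The DEFINE statements above dichotomize each five-point BSI item at a score greater than 1. A minimal sketch of the same recode outside Mplus (purely illustrative):

```python
# Recode a 0-4 symptom rating into a binary item, mirroring the Mplus
# DEFINE logic "U=0; IF X>1 THEN U=1;" (illustrative only).
def dichotomize(x):
    return 1 if x > 1 else 0

print([dichotomize(v) for v in [0, 1, 2, 3, 4]])  # [0, 0, 1, 1, 1]
```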


The U variables are new binary items created from the original observed items x1–x18 and are specified as categorical variables in the CATEGORICAL statement of the VARIABLE command. The default estimator for categorical data analysis is WLSMV. For modeling categorical outcomes using WLS estimators, Mplus provides two types of parameterization: Delta parameterization and Theta parameterization (Muthén and Asparouhov 2002; Muthén and Muthén 1998–2017).15 In Delta parameterization, which is the default in Mplus, a scale factor (or scale parameter) for the continuous latent response variable y* is estimated in the model, but the residual variance of the continuous latent response variable is not. In Theta parameterization, the residual variance of y* is a parameter, but the scale factor is not. Delta parameterization has some advantages over Theta parameterization in model estimation, and it is recommended to stay with the default Delta parameterization unless Mplus issues a warning requiring Theta parameterization. However, Theta parameterization is preferred when hypotheses involving residual variances are of interest in multigroup modeling or longitudinal data analysis. In addition, for models in which a categorical variable is used as a mediating variable – that is, the categorical variable both is influenced by and influences other variables – only Theta parameterization can be used for model estimation (Muthén and Muthén 1998–2017, p. 675). In the OUTPUT command, the option STD is used to standardize latent variables, while STDY standardizes both latent variables and outcome variables. Since correlations between the unobserved y* continuous response variables (e.g. tetrachoric correlations) are analyzed, STD and STDY provide the same results.
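The two parameterizations are linked by 𝜃 = Δ⁻² − 𝜆²ψ (footnote 15). A small sketch of the conversion, with illustrative numbers:

```python
# Sketch of the Delta/Theta parameterization link for a y* response
# variable: theta = Delta**-2 - lambda**2 * psi. Numbers are illustrative.

def theta_from_delta(delta, lam, psi):
    """Residual variance of y* implied by scale factor Delta."""
    return delta**-2 - lam**2 * psi

def delta_from_theta(theta, lam, psi):
    """Scale factor implied by residual variance theta."""
    return (lam**2 * psi + theta) ** -0.5

lam, psi = 0.8, 1.0
theta = theta_from_delta(1.0, lam, psi)
print(round(theta, 2))                              # 0.36
print(round(delta_from_theta(theta, lam, psi), 2))  # recovers 1.0
```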
When a robust estimator such as WLSMV, MLMV, or ULSMV is used for model estimation, the model 𝜒² difference can't be directly used for model comparison (Muthén and Muthén 1998–2017); instead, Mplus provides a two-step approach to conducting a model comparison test. In this program, the DIFFTEST option in the SAVEDATA command is used to specify a file name (TEST.DAT) in which the derivatives of the H1 model will be saved. This information will be retrieved in the next Mplus program, where a more restrictive H0 model will be specified. The model results show that the model fits the data very well: CFI = 0.990; TLI = 0.988; RMSEA = 0.030 (90% CI: 0.008, 0.045), close-fit test p = 0.988; SRMR = 0.070. With binary indicators, the relationships between the observed response variables and their underlying latent construct/factor are nonlinear. With a WLS estimator (such as WLSMV), Mplus uses the probit function to link the observed binary indicators to their underlying latent variables/factors. Correlations between the unobserved y* continuous response variables (i.e. tetrachoric correlations) are analyzed, rather than the variances/covariances of the observed indicators. As each binary indicator has only two categories (i.e. 0 vs. 1), one threshold (𝜏) is estimated for each

15 In Delta parameterization, the scale factor Δ is a free parameter defined as the inverted standard deviation of the latent response variable y*: that is, Δ = 1∕√𝜎*, with 𝜎* denoting the variance of y*; and the residual variance 𝜃 is obtained as a remainder: 𝜃 = Δ⁻² − 𝜆²ψ, where 𝜆 is the factor loading and ψ is the factor variance. In Theta parameterization, the residual variance is a free parameter, and the scale factor is obtained as a remainder: Δ⁻² = 𝜆²ψ + 𝜃 (Muthén and Asparouhov 2002).


indicator. The negative value of the threshold (−𝜏) is equivalent to the intercept of regressing the item on its underlying factor (see Appendix 2.D). The factor loading (𝜆) of each item here is the probit slope coefficient of regressing the item on its underlying factor. In the model, the estimated R-SQUARE is the R² for the latent continuous response variable y*, which equals the squared standardized factor loading. For example, the R² for U1 is 0.854² = 0.729 (see Table 2.9). In the following, we conduct the second step of model comparison using WLSMV for model estimation.

Mplus Program 2.12

TITLE: Model Comparison when using estimator WLSMV
DATA: FILE = BSI_18.dat;
VARIABLE: NAMES = X1-X18 gender Ethnic age edu crack id;
    MISSING = ALL (-9);
    USEVARIABLES = U1 U4 U7 U10 U13 U16 U5 U2 U8 U11 U14 U17
        U3 U6 U9 U12 U15 U18;
    CATEGORICAL = U1 U4 U7 U10 U13 U16 U5 U2 U8 U11 U14 U17
        U3 U6 U9 U12 U15 U18;
DEFINE: U1=0; IF X1>1 THEN U1=1;
    U4=0; IF X4>1 THEN U4=1;
    U7=0; IF X7>1 THEN U7=1;
    U10=0; IF X10>1 THEN U10=1;
    U13=0; IF X13>1 THEN U13=1;
    U16=0; IF X16>1 THEN U16=1;
    U5=0; IF X5>1 THEN U5=1;
    U2=0; IF X2>1 THEN U2=1;
    U8=0; IF X8>1 THEN U8=1;
    U11=0; IF X11>1 THEN U11=1;
    U14=0; IF X14>1 THEN U14=1;
    U17=0; IF X17>1 THEN U17=1;
    U3=0; IF X3>1 THEN U3=1;
    U6=0; IF X6>1 THEN U6=1;
    U9=0; IF X9>1 THEN U9=1;
    U12=0; IF X12>1 THEN U12=1;
    U15=0; IF X15>1 THEN U15=1;
    U18=0; IF X18>1 THEN U18=1;
ANALYSIS: ESTIMATOR = WLSMV; !default;
    DIFFTEST=TEST.DAT; !Retrieve information saved in Program 2.11;
MODEL: SOM BY U1 U4 U7 U10 U13 U16; !Somatization;
    DEP BY U5 U2 U8 U11 U14 U17; !Depression;
    ANX BY U3 U6 U9 U12 U15 U18; !Anxiety;
    SOM DEP ANX (V1);

The more restrictive H0 model is specified by imposing equality restrictions on the variances of all three factors. The label (V1)16 in the SOM DEP ANX statement of the MODEL command requests that the variances of the factors SOM, DEP, and ANX be set equal to each other. The DIFFTEST option in the ANALYSIS command retrieves the file TEST.DAT created by Mplus Program 2.11 to calculate the 𝜒² difference between

16 The labels in parentheses are arbitrary symbols (such as letters or numbers) or a combination of symbols.

Table 2.9 Selected Mplus output: CFA with binary indicators.

THE MODEL ESTIMATION TERMINATED NORMALLY
…
RMSEA (Root Mean Square Error Of Approximation)
    Estimate                                0.030
    90 Percent C.I.                         0.008  0.045
    Probability RMSEA <= .05                0.988

…

U3=0; IF X3>1 THEN U3=1;
U6=0; IF X6>1 THEN U6=1;
U9=0; IF X9>1 THEN U9=1;
U12=0; IF X12>1 THEN U12=1;
U15=0; IF X15>1 THEN U15=1;
U18=0; IF X18>1 THEN U18=1;
ANALYSIS: ESTIMATOR = ML;
MODEL: Anx by U3-U18*; !Free all factor loadings;
    ![Anx@0]; !Fix factor mean to 0, Default;
    Anx@1; !Fix factor variance to 1.0;
OUTPUT: STDY;
Plot: TYPE = PLOT2; !Plot the IRT-relevant curves;
Plot: TYPE = PLOT3; !Plot descriptive information for the latent variable;

The five-level categorical items are recoded as dichotomous measures in the DEFINE command. An ML estimator with the default logit link function is used


for model estimation, all factor loadings are freed, the factor mean is fixed to 0 by default in a single-group model, and the factor variance is fixed to 1. For the single-factor CFA with binary items, Mplus provides results both in a factor model parameterization and in a conventional IRT parameterization. The PLOT command requests graphical displays of observed data and analysis results, including ICCs and information curves. Mplus provides results for both the CFA model and the 2PL IRT model (see Table 2.11). As Eqs. (2.12) and (2.13) show, the estimate of the discrimination parameter a for each item is identical to the corresponding unstandardized factor loading, and the difficulty parameter b equals the ratio of the corresponding threshold to the factor loading. For example, for item U3, its discrimination parameter a and unstandardized factor loading are both 1.483; the item's difficulty parameter b equals its threshold divided by its unstandardized factor loading: 𝜏∕𝜆 = 0.462∕1.483 = 0.311. The 2PL IRT model's ICCs for the six items are shown in Figure 2.5. All the ICCs are S-shaped as expected, and none of them crosses the other curves, indicating that the assumption of item parameter invariance is not violated. The discriminating power is high for all the items, as all the ICCs have a steep slope. While the ICC of item U3 is a little flatter than the others, the slopes of the other ICCs seem similar to each other. If we are interested in testing whether all items have equal discriminating power, an equality restriction can be imposed on all the factor loadings (see Mplus Program 2.8); the model then reduces to the 1PL Rasch model (Rasch 1960; Bond and Fox 2007). The difference in model fit between the 2PL and 1PL IRT models can be tested using the LR test discussed in Section 2.3.2. From Figure 2.5, we can see that items U6, U15, and U3 are easier, among which U6 is the least difficult, with the smallest difficulty parameter b = 0.109.
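The loading-to-IRT conversion just described can be checked numerically. A small sketch using the U3 and U18 estimates reported in Table 2.11:

```python
# Factor-to-IRT conversion: a_j equals the unstandardized loading,
# and b_j = tau_j / lambda_j. Values are the U3/U18 estimates from
# Table 2.11.

def irt_from_cfa(lam, tau):
    return {"a": lam, "b": tau / lam}

print(round(irt_from_cfa(1.484, 0.462)["b"], 3))  # U3:  0.311
print(round(irt_from_cfa(2.184, 2.412)["b"], 3))  # U18: 1.104
```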
On the other hand, items U18, U9, and U12 are more difficult, among which item U18 is the most difficult, with the largest difficulty parameter b = 1.104. The probability of giving a correct response to items U18, U9, and U12 is substantially smaller at any given anxiety level, compared to items U6, U15, and U3. IRT estimates the value of the underlying latent construct/trait for each respondent. The precision of the estimation is measured by information. We will show later that more information implies less error of measurement. Information based upon a single item j at a given level of latent trait 𝜃 is defined as (de Ayala 2009)

I_j(\theta) = a_j^2 P_j(\theta) Q_j(\theta) \qquad (2.14)

where aj is the discrimination parameter for item j, Pj(𝜃) is the IRF, and Qj(𝜃) = 1 − Pj(𝜃). Figure 2.6 shows the item information functions (IIFs) for the six items in our 2PL IRT example model. For each item, the IIF peaks at the item's difficulty level; that is, an item measures the latent construct/trait with the greatest precision at the 𝜃 level corresponding to the item's difficulty parameter bj. The amount of item information decreases as the 𝜃 level departs from the item's bj and approaches zero at the extremes of the 𝜃 scale. An item with a larger discrimination parameter aj provides a larger amount of information. In our example, items U12 and U15 have the largest aj, and thus their IIFs are the tallest, while item U3 has the smallest aj and the flattest IIF, as shown in Figure 2.6.
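Eq. (2.14) can be sketched directly; the a and b values below are illustrative, not estimates from the example model:

```python
from math import exp

# Sketch of Eq. (2.14) for a 2PL item with a logistic link;
# a and b here are illustrative only.

def irf(theta, a, b):
    """2PL item response function P_j(theta)."""
    return 1 / (1 + exp(-a * (theta - b)))

def item_info(theta, a, b):
    """I_j(theta) = a_j^2 * P_j(theta) * Q_j(theta)."""
    p = irf(theta, a, b)
    return a**2 * p * (1 - p)

a, b = 2.0, 0.5
# Information peaks at theta = b, where P = 0.5 and I = a^2 / 4 ...
print(round(item_info(0.5, a, b), 2))  # 1.0
# ... and falls off as theta moves away from b:
print(round(item_info(2.5, a, b), 3))  # 0.071
```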

Table 2.11 Selected Mplus output: 2PL IRT model.

MODEL RESULTS
                       Estimate     S.E.  Est./S.E.  Two-Tailed P-Value
 ANX BY
    U3                    1.484    0.278      5.339               0.000
    …
    U18                   2.184    0.462      4.726               0.000
 Means
    ANX                   0.000    0.000    999.000             999.000
 Thresholds
    U3$1                  0.462    0.185      2.500               0.012
    …
    U18$1                 2.412    0.397      6.074               0.000
 Variances
    ANX                   1.000    0.000    999.000             999.000

IRT PARAMETERIZATION
 Item Discriminations
 ANX BY
    U3                    1.484    0.278      5.339               0.000
    …
    U18                   2.184    0.462      4.726               0.000
 Means
    ANX                   0.000    0.000      0.000               1.000
 Item Difficulties
    U3                    0.311    0.125      2.485               0.013
    …
    U18                   1.104    0.151      7.321               0.000
 Variances
    ANX                   1.000    0.000      0.000               1.000

Because of the assumption of local independence, for a test with a set of J items, the test information I(𝜃) at a given latent construct/trait level 𝜃 is simply the sum of the item information at that level:

I(\theta) = \sum_{j=1}^{J} I_j(\theta) \qquad (2.15)


Figure 2.5 Anxiety item ICCs.

Figure 2.6  Anxiety item information function (IIF) curves (items U3, U6, U9, U12, U15, and U18).


where I(θ) measures the precision of the assessment of the latent construct/trait θ, and Ij(θ) indicates the jth item's contribution to I(θ). Clearly, a multiple-item scale measures the theoretical construct more precisely than a single item does, and a scale with more items provides a larger amount of information than one with fewer items. Plotting I(θ) against θ results in a test information function (TIF) curve. The TIF curve of the BSI-18 Anxiety scale in our example is shown in Figure 2.7. The amount of information varies across the entire range of the latent construct/trait scale; in general, larger amounts of information are associated with construct/trait levels closer to the middle of the θ scale. Once test information is known, the standard error of estimation (SEE) of the latent construct/trait can be calculated as follows (Thissen 2000):

SEE(θ) = 1/√I(θ)   (2.16)

where the unit of SEE is the logit, because a logit link is used in the 2PL IRT model. According to Thissen (2000), the SEE can be used to calculate test/scale reliability using the following formula:

R = 1 − SEE(θ)²   (2.17)
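Eqs. (2.16) and (2.17) can be checked with a few lines of Python (an illustration, not book code); the values 3.5 and 7.5 are the test-information readings quoted in the text. Carrying full precision gives R = 0.71 and 0.87, while the text's 0.72 and 0.86 come from first rounding the SEE.

```python
import math

def see(info):
    """Standard error of estimation, Eq. (2.16): SEE = 1 / sqrt(I)."""
    return 1.0 / math.sqrt(info)

def reliability(info):
    """Scale reliability at a given theta, Eq. (2.17): R = 1 - SEE^2."""
    return 1.0 - see(info) ** 2

print(round(see(3.5), 2))          # ≈ 0.53 logits
print(round(reliability(3.5), 2))  # ≈ 0.71 (0.72 in the text squares the rounded SEE)
print(round(see(7.5), 2))          # ≈ 0.37 logits
print(round(reliability(7.5), 2))  # ≈ 0.87 (0.86 in the text squares the rounded SEE)
```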

From Figure 2.7 and Eqs. (2.16) and (2.17), we can see that different latent trait θ scores are estimated with differing degrees of precision, and thus the test/scale reliability varies across the θ scale. For the BSI-18 Anxiety scale in our example, test information is about 3.5 when θ = −0.48 or θ = 1.85. Thus, SEE(θ = −0.48 or θ = 1.85) = 1/√3.5 = 0.53 logits, and the reliability of estimating θ = −0.48 or θ = 1.85 is R = 1 − 0.53² = 0.72. The TIF has its largest information, TIF = 7.5, at the latent construct/trait level θ = 0.75. Thus, SEE(θ = 0.75) = 1/√7.5 = 0.37 logits, and the reliability of estimating θ = 0.75 is R = 0.86. The reliability of the BSI-18 Anxiety scale is below 0.72 for respondents whose Anxiety level is below −0.48 or greater than 1.85 on the θ scale. Traditionally, scale reliability is measured using a single index (e.g. Cronbach's alpha), which is helpful in characterizing a scale's average reliability. Estimating scale reliability from test information extends the concept of reliability: in IRT, reliability is not uniform across the theoretical construct/trait θ scale but can be quantified for any particular range of the θ scale.

Figure 2.7  Anxiety test information (TIF) curve.

2.6.1.2 The two-parameter normal ogive (2PNO) IRT model

The two IRT parameters can also be estimated using a probit link, resulting in the two-parameter normal ogive (2PNO) model, in which the IRF is defined as follows:

P(Yj | θ) = Φ[aj(θ − bj)]   (2.18)

where Φ denotes the CDF of the standard normal distribution. Replacing the ML estimator in Mplus Program 2.14 with a WLS estimator (e.g. the default WLSMV), the probit link will be used by default, and the following 2PNO IRT model can be estimated:

P(Yj | θ) = Φ(−τj + λj θ)   (2.19)

Using a WLS estimator with the default delta parameterization, setting the factor mean to 0 and the factor variance to 1, the Mplus estimates of parameters τ and λ can be transformed to IRT parameters a and b using the following formulas (Asparouhov and Muthén 2016):

aj = 1/√(λj⁻² − 1)   (2.20)

bj = τj/λj   (2.21)
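A short sketch of the delta-parameterization conversion in Eqs. (2.20) and (2.21) (illustrative Python, not book code; the loading and threshold values are hypothetical, chosen only for the example):

```python
import math

def probit_to_irt(lam, tau):
    """Convert Mplus delta-parameterization probit estimates (lambda, tau)
    to 2PNO IRT parameters (a, b), Eqs. (2.20)-(2.21). Assumes the factor
    mean is 0 and the factor variance is 1."""
    a = 1.0 / math.sqrt(lam ** -2 - 1.0)  # Eq. (2.20)
    b = tau / lam                         # Eq. (2.21)
    return a, b

# Hypothetical loading/threshold values, for illustration only
a, b = probit_to_irt(0.70, 0.35)
# Eq. (2.20) is algebraically the same as lambda / sqrt(1 - lambda^2):
print(abs(a - 0.70 / math.sqrt(1.0 - 0.70 ** 2)) < 1e-12)  # True
print(round(b, 2))  # 0.5
```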

The b difficulty parameters estimated from the 2PNO IRT model are identical to those from the 2PL IRT model; however, the a discrimination parameters are smaller. A scaling factor D, usually D = 1.702, is often used to scale the logistic IRT discrimination parameter a to the normal ogive IRT a (Camilli 2017).18 This means the logistic IRT discrimination parameter a is approximately 1.702 times larger than the normal ogive IRT a. Interested readers may try the 2PNO IRT model by replacing ESTIMATOR=ML in Mplus Program 2.14 with ESTIMATOR=WLSMV, and compare the parameter estimates between the 2PL IRT and 2PNO IRT models.

18 Different values of the scaling factor D have been suggested: D = 1.702 (usually rounded to 1.70) by Haley (1952); D = 1.814 by Cox (1970); D = 1.70 by Johnson and Kotz (1970); and D = 1.749 by Savalei (2006).
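The footnote's scaling factor can be verified numerically: with D = 1.702, the scaled logistic CDF stays within about 0.01 of the standard normal CDF everywhere, which is why logistic discriminations are roughly 1.702 times their normal ogive counterparts (illustrative Python, not book code).

```python
import math
from statistics import NormalDist

D = 1.702                 # scaling factor from the footnote
phi = NormalDist().cdf    # standard normal CDF

# Maximum gap between the scaled logistic CDF and the normal CDF on [-4, 4]
grid = [i / 100.0 for i in range(-400, 401)]
max_gap = max(abs(1.0 / (1.0 + math.exp(-D * x)) - phi(x)) for x in grid)
print(max_gap < 0.01)  # True
```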


2.6.2 The graded response model (GRM)

In this section we extend IRT modeling from dichotomous items to ordinal categorical or polytomous items. Responses to such items are called graded responses. Various IRT models, such as the partial credit model (PCM, Masters 1992), the rating scale model (RSM, Andrich 1978), the generalized partial credit model (GPCM, Muraki 1992), and the GRM (Samejima 1969, 1996; Baker and Kim 2004; du Toit 2003), have been developed for ordinal categorical outcomes. Here we briefly discuss and demonstrate the well-known GRM using our BSI-18 Anxiety scale data.

2.6.2.1 The two-parameter logistic form of graded response model (2PL GRM)

The two-parameter logistic form of the GRM (2PL GRM) is an extension of the 2PL IRT model in which items are graded with more than two response categories. Each polytomous item is treated as a set of concatenated dichotomous items. For an ordinal item Uj with m = 1, 2, …, M categories, the item can be split into (M − 1) binary measures like this: 1 vs. (2, 3, …, M); (1, 2) vs. (3, 4, …, M); …; [1, 2, 3, …, (M − 1)] vs. M. The GRM models the probability of endorsing any given response category or higher. In our example, each of the BSI-18 items has five response categories (0 − Not at all, 1 − A little bit, 2 − Moderately, 3 − Quite a bit, 4 − Extremely). For item j of the BSI-18 Anxiety scale, the two-parameter logistic form of the GRM (2PL GRM) can be described as

P(uj ≥ 1 | θ) = 1 / (1 + exp[−aj(θ − bj1)])
P(uj ≥ 2 | θ) = 1 / (1 + exp[−aj(θ − bj2)])
P(uj ≥ 3 | θ) = 1 / (1 + exp[−aj(θ − bj3)])
P(uj ≥ 4 | θ) = 1 / (1 + exp[−aj(θ − bj4)])   (2.22)

where the first category (uj = 0) of the item is treated as the reference category. There is only one slope parameter aj, which is the discrimination parameter of the item. The parameters bj1–bj4 are the (M − 1) = 4 category difficulty parameters (between-category threshold parameters, or category boundaries), each of which represents the value of the latent construct/trait θ at which there is a probability of 0.50 of crossing over into the next highest response category of item j. For example, the parameter bj1 represents the expected θ value at which the probability of endorsing a value of 1 or higher is 0.50. For any given value of θ > bj1, the response to item j is most likely to be a value of 1 or higher rather than 0. From the cumulative probabilities shown in Eq. (2.22), the probabilities of endorsing each specific category (i.e. 0, 1, 2, 3, 4) of item j can be calculated as


P(uj = 0 | θ) = 1 − P(uj ≥ 1 | θ)
P(uj = 1 | θ) = P(uj ≥ 1 | θ) − P(uj ≥ 2 | θ)
P(uj = 2 | θ) = P(uj ≥ 2 | θ) − P(uj ≥ 3 | θ)
P(uj = 3 | θ) = P(uj ≥ 3 | θ) − P(uj ≥ 4 | θ)
P(uj = 4 | θ) = P(uj ≥ 4 | θ) − 0   (2.23)
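The cumulative curves of Eq. (2.22) and the category probabilities of Eq. (2.23) are easy to compute. The sketch below (illustrative Python, not book code) uses the item u3 parameters that Eq. (2.26) derives later in this section (a = 2.065; b = −0.734, 0.265, 0.959, 1.998). It shows that the five category probabilities sum to 1 and that at θ = 0 the second category (u3 = 1) is the most likely, consistent with the discussion of Figure 2.8.

```python
import math

def cumulative_probs(theta, a, bs):
    """P(u >= m | theta) for m = 1..M-1, Eq. (2.22)."""
    return [1.0 / (1.0 + math.exp(-a * (theta - b))) for b in bs]

def category_probs(theta, a, bs):
    """Category probabilities as adjacent cumulative differences, Eq. (2.23)."""
    cum = [1.0] + cumulative_probs(theta, a, bs) + [0.0]
    return [cum[m] - cum[m + 1] for m in range(len(bs) + 1)]

# Item u3 parameters derived in Eq. (2.26)
probs = category_probs(0.0, 2.065, [-0.734, 0.265, 0.959, 1.998])
print([round(p, 3) for p in probs])
print(abs(sum(probs) - 1.0) < 1e-12)  # the five probabilities sum to 1
print(probs.index(max(probs)))        # index 1: u3 = 1 is most likely at theta = 0
```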

Plotting these probabilities against θ, we obtain ICCs that are also called category response functions (CRFs), category characteristic curves (CCCs), or operating characteristic curves (OCCs) of the item. With five categories, each graded-response BSI-18 item has five ICCs. To demonstrate the 2PL GRM, we implement a single-factor CFA with ordinal categorical items. In the Mplus formulation of the 2PL GRM, the last category of an ordinal item is treated as the reference category (Asparouhov and Muthén 2016). Mplus does not provide the 2PL GRM parameters aj and bjm directly; they have to be calculated from the threshold (τjm) and factor loading (λj) estimates, using the following formulas (Asparouhov and Muthén 2016):

aj = λj   (2.24)

bjm = τjm/λj   (2.25)

where the factor mean and variance are set to 0 and 1, respectively. In the following Mplus program, we demonstrate the 2PL GRM using our BSI-18 Anxiety scale.

Mplus Program 2.15

TITLE: 2PL graded response model (GRM)
DATA: FILE = BSI_18.dat;
VARIABLE: NAMES = U1-U18 gender Ethnic age edu crack id;
  MISSING = ALL (-9);
  USEVARIABLES = U3 U6 U9 U12 U15 U18;
  CATEGORICAL = U3-U18;
ANALYSIS: ESTIMATOR = ML;
MODEL: ANX BY U3-U18*; !Anxiety;
  ![ANX@0]; !Zero factor mean by default;
  ANX@1; !Set factor variance to 1;
OUTPUT: STDY;
PLOT: TYPE = PLOT2; !Plot the IRT-relevant curves;
  TYPE = PLOT3; !Plot descriptive information for the latent variable;

The BSI-18 Anxiety items are treated as ordinal categorical measures, denoted by Us as Mplus practitioners conventionally do. The ML estimator with the default logit link function is specified in the ANALYSIS command, and thus the 2PL GRM is estimated. All factor loadings are set as free parameters, the factor mean is fixed to 0 by default, and the factor variance is fixed to 1. The unstandardized factor loading and threshold estimates are shown in Table 2.12. According to Eq. (2.24), the 2PL GRM discrimination parameter aj for item j is the factor loading of the item. For example, the factor loading for item u3 is λ3 = 2.065, and thus a3 = 2.065. The four category difficulty parameters (between-category threshold parameters) of item u3 are calculated using Eq. (2.25) as follows:

bu3_1 = τ3_1/λ3 = −1.516/2.065 = −0.734
bu3_2 = τ3_2/λ3 = 0.547/2.065 = 0.265
bu3_3 = τ3_3/λ3 = 1.981/2.065 = 0.959
bu3_4 = τ3_4/λ3 = 4.126/2.065 = 1.998   (2.26)

Table 2.12  Selected Mplus output: 2PL GRM model.

MODEL RESULTS
                                          Two-Tailed
              Estimate    S.E.  Est./S.E.    P-Value
 ANX      BY
    U3           2.065   0.259      7.979      0.000
    U6           2.202   0.280      7.876      0.000
    U9           2.371   0.364      6.524      0.000
    U12          2.676   0.419      6.394      0.000
    U15          1.856   0.237      7.828      0.000
    U18          1.807   0.271      6.679      0.000
 Thresholds
    U3$1        -1.516   0.241     -6.296      0.000
    U3$2         0.547   0.213      2.575      0.010
    U3$3         1.981   0.261      7.605      0.000
    U3$4         4.126   0.413      9.994      0.000
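The transformation in Eqs. (2.24) and (2.25) can be reproduced from the Table 2.12 estimates in a few lines (illustrative Python, not book code; Mplus itself reports the λ and τ values):

```python
# Reproduce the Eq. (2.26) calculations for item u3
lam3 = 2.065                          # loading on ANX = discrimination a3, Eq. (2.24)
taus = [-1.516, 0.547, 1.981, 4.126]  # thresholds u3$1-u3$4 from Table 2.12

a3 = lam3
bs = [tau / lam3 for tau in taus]     # category difficulties, Eq. (2.25)
print([round(b, 3) for b in bs])      # [-0.734, 0.265, 0.959, 1.998]
```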


Figure 2.8  Item characteristic curves (ICCs) of 2PL GRM: BSI-18 anxiety item u3 (one curve per response category).

The ICCs for item u3 in our 2PL GRM are shown in Figure 2.8. Each ICC represents the probability of endorsing a specific category of item u3 conditional on the level of latent trait θ. Note that the form of the ICC varies across categories: the ICC for the first category (m = 0) monotonically decreases, the ICC for the last category (m = 4) monotonically increases, and the ICCs for the other categories are bell-shaped. For people with a low anxiety level (i.e. smaller values on the θ scale), the probability of endorsing the first category of item u3 (i.e. u3 = 0) is high; for those with a high anxiety level (i.e. larger values on the θ scale), the probability of endorsing the last category of item u3 (i.e. u3 = 4) is high. People with an average level of anxiety (i.e. θ = 0) are more likely to endorse category 2 of the item than any other category (Figure 2.8). Note that the ICCs of the GRM do not have a consistent form over the M categories. The first ICC is based on the probability of endorsing the first category (i.e. 1 minus the probability of endorsing the second or higher categories), and the last ICC is based on the probability of endorsing the last category; that is, these two ICCs are based on cumulative probabilities. The other ICCs are based on the probability of endorsing one specific category. Therefore, only the first and last difficulty parameters can be observed in the ICCs. In our example, the first difficulty parameter bu3_1 ≈ −0.73 is the point on the latent trait θ scale at which the probability of endorsing value 0 of item u3 is 0.50, and the last difficulty parameter bu3_4 ≈ 1.99 is the point at which the probability of endorsing value 4 of u3 is 0.50 (Figure 2.8). The cumulative probability of response category m or higher can be calculated using Eq. (2.22) to generate cumulative item response curves called boundary response functions (BRFs) (Samejima 1969).
The BRF for our 2PL GRM model is shown in Figure 2.9. The category difficulty parameters or between-category threshold parameters (bu3_1–bu3_4) calculated using Eq. (2.26) can be observed in the BRF shown in the figure: bu3_1 ≈ −0.73, bu3_2 ≈ 0.27, bu3_3 ≈ 0.96, bu3_4 ≈ 1.99. The TYPE = PLOT2 statement in the PLOT command of Mplus Program 2.15 plots the IRT-relevant curves, including the item information and scale information functions. The scale reliability can be readily estimated using Eqs. (2.16) and (2.17).

Figure 2.9  Boundary response function (BRF) of 2PL GRM: anxiety item u3.

2.6.2.2 The two-parameter normal ogive form of graded response model (2PNO GRM)

The two-parameter GRM can also be estimated as a CFA with a probit link function in Mplus. By replacing the ML estimator in Mplus Program 2.15 with a WLS estimator (e.g. WLSMV) and the default probit link, Mplus will estimate the 2PNO GRM. The formulas for the 2PNO GRM are described in Asparouhov and Muthén (2016) by type of parameterization (e.g. delta or theta parameterization). Again, Mplus does not directly provide the 2PNO GRM parameter estimates aj and bjm; they have to be calculated from the parameter estimates (e.g. factor loadings, thresholds) of the CFA model with categorical indicator variables, using the following formulas (Asparouhov and Muthén 2016):

aj = 1/√(λj⁻² − 1)   (2.27)

bjm = τjm/λj   (2.28)


where the mean and variance of the factor are set to 0 and 1, respectively; aj is the discrimination parameter of item j; and bjm is the difficulty parameter for endorsing the mth or higher categories of item j. The discrimination parameters of the 2PNO GRM differ from those of the 2PL GRM. Again, a scaling factor D, usually D = 1.702, is often used to scale the logistic GRM discrimination coefficient a to the normal ogive GRM a (Camilli 2017).

2.7 Higher-order CFA models

In a CFA model with multiple factors, the variance/covariance structure of the factors may be further analyzed by introducing second-order factor(s) into the model if (i) the first-order factors are substantially correlated with each other, and (ii) the second-order factors may be hypothesized to account for the variation among the first-order factors. For example, the three factors (SOM, DEP, ANX) of the BSI-18 scale in our example are highly correlated with each other, and, theoretically speaking, there may exist a more general construct (such as general severity of mental health) that underlies depression, anxiety, and somatization; as such, a second-order factor (e.g. general severity) may be specified to account for the covariation among the three first-order factors. If there are multiple second-order factors and a covariance structure exists among them, then third-order factors might be considered. This kind of model is called a higher-order or hierarchical CFA model and was first introduced by Jöreskog (1971a). Although the number of levels in higher-order factor analysis is unlimited in principle, usually a second-order CFA model is applied in real research. Let's use the BSI-18 to demonstrate the second-order CFA model shown in Figure 2.10.19 This model consists of two factorial structures: (i) the observed indicators (the BSI-18 items) are indicators of the three first-order factors (SOM, DEP, ANX); and (ii) the three first-order factors are indicators of the second-order factor (GSI). The covariances among the first-order factors are not perfectly explained by the second-order factor, and thus each of the first-order factors has a residual term (i.e. ζ1, ζ2, and ζ3); and just as in a first-order CFA, the residual terms are not supposed to be correlated with each other. The rules of model identification for the first-order CFA model apply to the higher-order factorial structures.
In this example of a second-order CFA model, the first-order factorial structure is over-identified as each factor has six indicators, while the second-order factorial structure is just identified because the second-order factor GSI has only three indicators (i.e. the first-order factors SOM, DEP, ANX). The following Mplus program runs a second-order CFA with continuous indicators.

19 All indicator variables/items are denoted by y's in the model diagram because the first-order factors (SOM, DEP, ANX) are not exogenous but endogenous latent variables in the model.

Figure 2.10  Second-order CFA of BSI-18 (GSI with residuals ζ1–ζ3 on the first-order factors SOM, DEP, and ANX; items y1–y18 with errors ε1–ε18).

Mplus Program 2.16

TITLE: BSI-18: Second-Order CFA
DATA: FILE = BSI_18.dat;
VARIABLE: NAMES = Y1-Y18 gender Ethnic age edu crack id;
  MISSING = ALL (-9);
  USEVARIABLES = Y1-Y18;
ANALYSIS: ESTIMATOR = MLR;
MODEL: SOM BY Y1 Y4 Y7 Y10 Y13 Y16; !Somatization;
  DEP BY Y5 Y2 Y8 Y11 Y14 Y17; !Depression;
  ANX BY Y3 Y6 Y9 Y12 Y15 Y18; !Anxiety;
  GSI BY SOM DEP ANX; !Global severity index;
OUTPUT: TECH1 STDY MOD;


The first-order factors (SOM, DEP, ANX) are specified as indicators of the second-order factor GSI. Like the first-order factors, the second-order factor GSI is a latent variable, and a metric must be assigned to it by (i) fixing the factor loading of one first-order factor to 1.0 (SOM in this example, by default); or (ii) fixing the variance of the second-order factor to 1.0. To test the goodness of fit of a second-order model, we can conduct an LR test by calculating the χ² difference between the second-order and first-order CFA models, since the models are nested. However, in order to be able to test the model-fit improvement, four or more first-order factors are needed to establish an over-identified second-order factorial structure. With only three first-order factors in our example model, the second-order factorial structure is just identified, and thus we are unable to test whether the second-order factorial structure improves model fit relative to the first-order CFA. Readers will find that the model fit statistics/indices are identical to those of the first-order CFA. The model results show that the first-order factors load strongly onto the second-order factor, with factor loadings ranging from 0.81 to 0.97. The proportions of variance in the first-order factors explained by the second-order factor are 0.72, 0.66, and 0.94, respectively, indicating that the higher-order solution accounts well for the covariances among the first-order factors (see Table 2.13). The relationships of the first-order and second-order factors with the observed indicators in a second-order CFA model can be further evaluated using the Schmid and Leiman (1957) transformation, which was initially developed for use in EFA models. Application of this method to higher-order CFA models is described in detail by Brown (2015).
The basic idea of the Schmid-Leiman transformation is to decompose the total item variance into two components: variance explained by the second-order factor and variance explained by the first-order factors. Table 2.14 shows the Schmid-Leiman transformation of our example second-order CFA model estimates based on the completely standardized solution. Columns A and B are the standardized first- and second-order factor loadings, respectively, for each observed BSI-18 item. Column C, the squared value of Column A, represents the total variance of the item explained by the factors. For example, about 57% of the variance in item y16, but only 42% of the variance in item y18, is explained by the first- and second-order factors in the model. The loading of an item on the second-order factor can be calculated as the product of the standardized first- and second-order factor loadings, and the squared value of this product is the item variance explained by the second-order factor (Column D). Knowing the total explained variance and the variance explained by the second-order factor, the variance of an item explained by the first-order factor can be readily calculated. For example, the total and second-order-factor explained variances of item y1 are 49.8% (Column C) and 35.7% (Column D), respectively, and thus its variance explained by the first-order factor SOM is (49.8% − 35.7%) = 14.2% (see Column G in Table 2.14).


Table 2.13  Selected Mplus output: second-order CFA with continuous indicators.

STDY Standardization
                                          Two-Tailed
              Estimate    S.E.  Est./S.E.    P-Value
 SOM      BY
    Y1           0.706   0.044     15.964      0.000
    Y4           0.595   0.055     10.812      0.000
    Y7           0.729   0.043     17.111      0.000
    Y10          0.704   0.046     15.287      0.000
    Y13          0.576   0.056     10.289      0.000
    Y16          0.755   0.040     18.669      0.000
 DEP      BY
    Y5           0.827   0.036     22.889      0.000
    Y2           0.759   0.040     18.744      0.000
    Y8           0.881   0.029     30.697      0.000
    Y11          0.737   0.044     16.867      0.000
    Y14          0.569   0.067      8.478      0.000
    Y17          0.380   0.084      4.547      0.000
 ANX      BY
    Y3           0.701   0.046     15.320      0.000
    Y6           0.722   0.041     17.626      0.000
    Y9           0.681   0.050     13.680      0.000
    Y12          0.699   0.043     16.088      0.000
    Y15          0.680   0.042     16.169      0.000
    Y18          0.648   0.053     12.209      0.000
 GSI      BY
    SOM          0.846   0.041     20.603      0.000
    DEP          0.810   0.047     17.320      0.000
    ANX          0.969   0.038     25.601      0.000
…
R-SQUARE
…
 Latent Variable
    SOM          0.715   0.069     10.301      0.000
    DEP          0.657   0.076      8.660      0.000
    ANX          0.939   0.073     12.800      0.000

Table 2.14  Schmid-Leiman transformation of the second-order CFA model estimates.

Columns: A = first-order factor loading; B = second-order factor loading; C = total variance explained by factors (A²); D = item variance explained by second-order factor ((A*B)²); E = square root of the unexplained variance of the first-order factor (√(1 − B²)); F = residualized first-order factor loading (A*E); G = item variance explained by first-order factor (F²); H = item variance not explained by factors (1 − (D + G), or 1 − C).

Item     A      B      C      D      E      F      G      H
SOM
 Y1    0.706  0.846  0.498  0.357  0.533  0.376  0.142  0.502
 Y4    0.595  0.846  0.354  0.253  0.533  0.317  0.101  0.646
 Y7    0.729  0.846  0.531  0.380  0.533  0.389  0.151  0.469
 Y10   0.704  0.846  0.496  0.355  0.533  0.375  0.141  0.504
 Y13   0.576  0.846  0.332  0.237  0.533  0.307  0.094  0.668
 Y16   0.755  0.846  0.570  0.408  0.533  0.403  0.162  0.430
DEP
 Y5    0.827  0.810  0.684  0.449  0.586  0.485  0.235  0.316
 Y2    0.759  0.810  0.576  0.378  0.586  0.445  0.198  0.424
 Y8    0.881  0.810  0.776  0.509  0.586  0.517  0.267  0.224
 Y11   0.737  0.810  0.543  0.356  0.586  0.432  0.187  0.457
 Y14   0.569  0.810  0.324  0.212  0.586  0.334  0.111  0.676
 Y17   0.380  0.810  0.144  0.095  0.586  0.223  0.050  0.856
ANX
 Y3    0.701  0.969  0.491  0.461  0.247  0.173  0.030  0.509
 Y6    0.722  0.969  0.521  0.489  0.247  0.178  0.032  0.479
 Y9    0.681  0.969  0.464  0.435  0.247  0.168  0.028  0.536
 Y12   0.699  0.969  0.489  0.459  0.247  0.173  0.030  0.511
 Y15   0.680  0.969  0.462  0.434  0.247  0.168  0.028  0.538
 Y18   0.648  0.969  0.420  0.394  0.247  0.160  0.026  0.580

Results are based on a standardized solution.


The item variance explained by the first-order factor can also be derived using the residualized first-order factor loading, which is the standardized first-order factor loading (Column A in Table 2.14) multiplied by the square root of the unexplained variance of the first-order factor (i.e. 1.0 minus the squared second-order factor loading; see Column E in Table 2.14). For example, the residualized first-order factor loading of y1 is calculated as 0.706 × √(1 − 0.846²) = 0.376 (see Column F), whose squared value is the variance of this item explained by the first-order factor SOM (14.2%; see Column G). The unexplained item variances (Column H in Table 2.14) can be calculated as (1 − Column C), which equals (1 − item R-squared) = (1 − 0.498) = 0.502, or equivalently [1 − (D + G)] = [1 − (0.357 + 0.142)] = 0.502. This shows that the Schmid-Leiman transformation does not alter the explanatory power of the original CFA solution (Brown 2015).
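The column arithmetic of Table 2.14 can be condensed into a small function (illustrative Python, not book code). It reproduces the y1 decomposition discussed above and verifies that Column C equals Column D plus Column G.

```python
import math

def schmid_leiman(first_order, second_order):
    """Decompose one item's variance as in Table 2.14 (standardized solution).
    first_order: the item's loading on its first-order factor (Column A);
    second_order: that factor's loading on the second-order factor (Column B)."""
    total = first_order ** 2                                        # Column C
    general = (first_order * second_order) ** 2                     # Column D
    resid_loading = first_order * math.sqrt(1 - second_order ** 2)  # Column F
    specific = resid_loading ** 2                                   # Column G
    unexplained = 1 - total                                         # Column H
    return total, general, specific, unexplained

# Item y1: loading 0.706 on SOM; SOM loads 0.846 on GSI
total, general, specific, unexplained = schmid_leiman(0.706, 0.846)
print(round(general, 3), round(specific, 3))      # 0.357 0.142
print(abs(total - (general + specific)) < 1e-12)  # Column C = D + G
```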

2.8 Bifactor models

In the second-order CFA model, where covariances between the first-order factors are explained by a higher-order factor, there are indirect relationships between the higher-order factor and the observed indicators/items, mediated by the first-order factors. An alternative representation of the factor structure of an instrument is the bifactor model (Holzinger and Swineford 1937; Reise et al. 1993; Chen et al. 2006). In a bifactor model, a general factor and multiple group factors (or domain-specific factors) compete to explain the variance of the indicators, and no factor is higher than another. The general factor reflects a single general dimension that is common to all items, while a specific group factor explains the covariances in its own set of domain-specific items that are not accounted for by the general factor (Patrick et al. 2007; Pomplun 2007). Just as the first-order factors in a second-order CFA model are not correlated once they load on the second-order factor, the group factors in a bifactor model are not correlated, controlling for the general factor. Therefore, the covariances between the group factors in a bifactor CFA model are all fixed to 0.20 A bifactor CFA using the data from BSI_18.dat is shown in Figure 2.11 and estimated using the following Mplus program.

Mplus Program 2.17

TITLE: BSI-18: Bifactor CFA
DATA: FILE = BSI_18.dat;
VARIABLE: NAMES = X1-X18 Gender Ethnic Age Edu Crack ID;
  MISSING = ALL (-9);
  USEVARIABLES = X1-X18;
ANALYSIS: ESTIMATOR = MLR;

20 Our exploratory modeling results show that the covariances between the factors in our bifactor model were actually not statistically significant.

Figure 2.11  Bifactor CFA of BSI-18 (general factor GS and group factors SOM, DEP, and ANX; items x1–x18 with errors ε1–ε18).

MODEL: !General factor: Global severity index
  GSI BY X1-X18* (lam1-lam18);
  !Group factors: Somatization, Depression, and Anxiety
  SOM BY X1* X4 X7 X10 X13 X16 (lamS1-lamS6);
  DEP BY X5* X2 X8 X11 X14 X17 (lamD1-lamD6);
  ANX BY X3* X6 X9 X12 X15 X18 (lamA1-lamA6);
  GSI WITH SOM-ANX@0;
  SOM-ANX WITH SOM-ANX@0;
  SOM@1; ANX@1; DEP@1; GSI@1;
  X1(v1); X4(v2); X7(v3); X10(v4); X13(v5); X16(v6);
  X5(v7); X2(v8); X8(v9); X11(v10); X14(v11); X17(v12);
  X3(v13); X6(v14); X9(v15); X12(v16); X15(v17); X18(v18);
MODEL CONSTRAINT: !Omega hierarchical;
  NEW(GSUM SSUM DSUM ASUM RSUM G_OMEGA S_OMEGA D_OMEGA A_OMEGA ECV);
  GSUM = (lam1+lam2+lam3+lam4+lam5+lam6+lam7+lam8+lam9+lam10+
          lam11+lam12+lam13+lam14+lam15+lam16+lam17+lam18)^2;
  SSUM = (lamS1+lamS2+lamS3+lamS4+lamS5+lamS6)^2;
  DSUM = (lamD1+lamD2+lamD3+lamD4+lamD5+lamD6)^2;
  ASUM = (lamA1+lamA2+lamA3+lamA4+lamA5+lamA6)^2;
  RSUM = (v1+v2+v3+v4+v5+v6+v7+v8+v9+v10+v11+v12+v13+v14+v15+v16+v17+v18);
  G_OMEGA = GSUM/(GSUM+SSUM+DSUM+ASUM+RSUM);
  S_OMEGA = SSUM/(GSUM+SSUM+DSUM+ASUM+RSUM);
  D_OMEGA = DSUM/(GSUM+SSUM+DSUM+ASUM+RSUM);
  A_OMEGA = ASUM/(GSUM+SSUM+DSUM+ASUM+RSUM);
  ECV = GSUM/(GSUM+SSUM+DSUM+ASUM);

Each item loads on the general factor (e.g. global severity) and one domain-specific group factor (SOM, DEP, or ANX). Covariances among the group factors are fixed to 0, and none of the group factors is correlated with the general factor. All factor loadings and residual variances are labeled in the program; the labels are used in the MODEL CONSTRAINT command to calculate the omega hierarchical coefficient (ωh), the omega hierarchical subscale coefficients (ωs), and the explained common variance (ECV) index. The omega hierarchical ωh represents the proportion of variance of the total scores explained by the single general factor in the bifactor model (Zinbarg et al. 2005; Reise et al. 2010). Based on Reise (2012), ωh for our example bifactor model can be computed as

ωh = (ΣλG)² / [(ΣλG)² + (ΣλS)² + (ΣλD)² + (ΣλA)² + Σθ²]   (2.29)

where the λG in the numerator are the unstandardized factor loadings on the general factor; λS, λD, and λA in the denominator are the unstandardized factor loadings on the three domain-specific factors (i.e. SOM, DEP, and ANX), respectively; and θ² are the item error/residual variances. The omega hierarchical ωh is a better estimate of scale reliability than Cronbach's alpha because it integrates group factors into true-score variation for reliability estimation, and it does not assume equal factor loadings for all items (tau-equivalence) as Cronbach's alpha does. An omega hierarchical subscale coefficient (ωs) can also be calculated for each subscale, providing an estimate of subscale reliability controlling for the general factor (Reise 2012). For example, we can use the following formula to calculate the omega hierarchical subscale ωs for the Somatization scale in our example:

ωs = (ΣλS)² / [(ΣλG)² + (ΣλS)² + (ΣλD)² + (ΣλA)² + Σθ²]   (2.30)

Bifactor models provide useful information about the extent to which items are multidimensional. The ECV is an index of unidimensionality. It can be calculated
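Eqs. (2.29)–(2.31) are straightforward to compute once the loadings are labeled. The sketch below (illustrative Python, not book code; the loadings and residual variances are hypothetical, not the BSI-18 estimates) mirrors the MODEL CONSTRAINT arithmetic for a toy two-group bifactor model.

```python
def bifactor_indices(gen, groups, resid):
    """Omega hierarchical (Eq. 2.29), subscale omegas (Eq. 2.30), and ECV
    (Eq. 2.31) from unstandardized loadings and residual variances."""
    g = sum(gen) ** 2                       # (sum of general loadings)^2
    s = [sum(lams) ** 2 for lams in groups] # one squared sum per group factor
    total = g + sum(s) + sum(resid)
    omega_h = g / total
    omega_s = [sk / total for sk in s]
    ecv = g / (g + sum(s))
    return omega_h, omega_s, ecv

# Hypothetical loadings/residuals for a toy 6-item, two-group model
oh, os_, ecv = bifactor_indices(
    gen=[0.7, 0.6, 0.8, 0.7, 0.6, 0.8],
    groups=[[0.3, 0.2, 0.3], [0.2, 0.3, 0.2]],
    resid=[0.4, 0.5, 0.3, 0.4, 0.5, 0.3],
)
print(round(oh, 3), round(ecv, 3))  # 0.833 0.94
```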


using the estimated factor loadings of the general and group factors of a bifactor model (Bentler 2009; Reise et al. 2010):

ECV = (ΣλG)² / [(ΣλG)² + (ΣλS)² + (ΣλD)² + (ΣλA)²]   (2.31)

where the denominator is the sum of the squared unstandardized factor loadings of both the general factor and all the group factors. The ECV estimates the proportion of the common variance in the bifactor model that is attributable to the single general factor, and thus it is considered an indicator of unidimensionality. A high ECV indicates that the data have a strong general factor compared to the group factors. The bifactor model fits the data very well: model χ² = 187.107 (df = 117, P < 0.001); RMSEA = 0.049, 90% CI: (0.036, 0.062), close-fit test P = 0.528; CFI = 0.955; TLI = 0.941; SRMR = 0.040. The information criteria statistics (AIC = 12 013.621, BIC = 12 266.588, adjusted BIC = 12 038.346) of the model are very close to those of the second-order model, indicating that the two models fit the data equally well. However, as the second-order model is nested within the corresponding bifactor model (Yung et al. 1999; Chen et al. 2006), the two models can be directly compared using the LR test. Note that the robust estimator MLR is used for model estimation; therefore, Eqs. (2.6)–(2.8) should be used for the LR test. Selected results of the bifactor model are shown in Table 2.15. Factor loadings on the general factor (global severity) are stronger than those on the group factors: 17 out of 18 such factor loadings are greater than 0.50. In contrast, two out of six factor loadings on each of the first two group factors (i.e. Somatization and Depression) are not statistically significant, and none of the loadings on the third group factor (i.e. Anxiety) are statistically significant. Only one item (x8) has a larger loading on a domain-specific factor than on the general factor.
The estimated omega hierarchical coefficient (ωh); omega hierarchical subscale coefficients (ωs) for Somatization (ωs), Depression (ωd), and Anxiety (ωa); and ECV index are shown at the bottom of Table 2.15. The omega hierarchical (ωh = 0.868) is above the recommended level of 0.80 (Reise et al. 2010). The omega hierarchical subscale coefficients (ωs) for the Somatization, Depression, and Anxiety scales are small: ωs = 0.032, ωd = 0.034, and ωa = 0.008, respectively. The results show very little variance explained by the group factors beyond the general factor. In addition, the explained common variance index ECV = 0.922, indicating that 92.2% of the common variance is explained by the general factor. Such findings suggest a strong single common factor (e.g. global severity) in the BSI-18, although its factorial structure is clearly multidimensional (Somatization, Depression, and Anxiety) in content. That is, we may consider unidimensionality of the BSI-18 for the population under study. A bifactor model is a plausible and useful alternative to the traditional second-order CFA model. It is a better foundational model for conceptualizing dimensionality because it enables us to maintain a unidimensional structure while also recognizing multidimensionality due to item content diversity (Reise et al. 2010).


Table 2.15 Selected Mplus output: bifactor model.

If the model converges quickly and PPP > 0.05, then the model is appropriate; if convergence is slow or there is no convergence, then one runs the model again using a larger d value. If model estimation converges quickly but PPP < 0.05, then one runs the model again using a smaller d value. This is also called sensitivity analysis for BSEM (Asparouhov et al. 2015). If models with different values of d all converge quickly and all have PPP > 0.05, the deviance information criterion (DIC) can be used for model comparisons to determine the more appropriate value of d. In this section, we demonstrate how to implement BCFA on the basis of the CFA model shown in Figure 2.2, where all cross-factor loadings and item residual covariances are fixed to 0. In BCFA, instead of fixing those parameters exactly to 0, they are only approximately fixed to 0 by specifying small-variance priors for Bayesian estimation. As mentioned earlier, the priors for cross-loadings in BCFA are commonly specified as 𝜆 ∼ N(0, 0.01). The priors for the residual 𝜃 matrix defined in Eq. (2.32) are usually treated as DDPs calculated from the estimates of the residual 𝜃 matrix of CFA with FIML. Following Asparouhov et al. (2015), we estimated the D values of Eq. (2.32) from the CFA using the following Mplus program.

Mplus Program 2.18

TITLE: BSI-18: Estimate residual data-dependent priors
DATA: FILE = BSI_18.dat;
VARIABLE: NAMES = X1-X18 Gender Ethnic Age Edu Crack ID;
    MISSING = ALL (-9);
    USEVARIABLES = Y1-Y18;
DEFINE: Y1=X1; Y2=X4; Y3=X7; Y4=X10; Y5=X13; Y6=X16;
    Y7=X5; Y8=X2; Y9=X8; Y10=X11; Y11=X14; Y12=X17;
    Y13=X3; Y14=X6; Y15=X9; Y16=X12; Y17=X15; Y18=X18;
    STANDARDIZE Y1-Y18;
ANALYSIS: ESTIMATOR=MLR;
MODEL: SOM BY Y1* Y2-Y6;
    DEP BY Y7* Y8-Y12;
    ANX BY Y13* Y14-Y18;
    SOM@1; DEP@1; ANX@1;

All items are standardized, and the factor variances are fixed to 1. To make it easier to label the residual parameters later in the BCFA program, the indicator variables y1–y18 created in this program are numbered in sequence by factor. The residual 𝜃 matrix parameters estimated from the program are used to calculate the DDPs for the residual 𝜃 matrix defined in Eq. (2.32). We then implement the BCFA model, starting with the degrees of freedom of the inverse-Wishart distribution set to d = 100, in the following Mplus program.


Mplus Program 2.19

TITLE: BSI-18: Bayesian CFA
DATA: FILE = BSI_18.dat;
VARIABLE: NAMES = X1-X18 Gender Ethnic Age Edu Crack ID;
    MISSING = ALL (-9);
    USEVARIABLES = Y1-Y18;
DEFINE: Y1=X1; Y2=X4; Y3=X7; Y4=X10; Y5=X13; Y6=X16;
    Y7=X5; Y8=X2; Y9=X8; Y10=X11; Y11=X14; Y12=X17;
    Y13=X3; Y14=X6; Y15=X9; Y16=X12; Y17=X15; Y18=X18;
    STANDARDIZE Y1-Y18;
ANALYSIS: ESTIMATOR=BAYES;
    PROCESSORS=2;
    !CHAINS=2; !default number of chains;
    !POINT=MEDIAN; !default point estimate;
    BITERATIONS=(10000); !minimum number of total iterations;
    !FBITERATIONS=100000; !requests a fixed number of Bayes iterations;
    !KOLMOGOROV=1000; !default number of draws from chains for KS test;
    !THIN=10; !keep every 10th sample/draw from each chain;
MODEL: SOM BY Y1* Y2-Y6
        Y7-Y18 (Sxload1-Sxload12); !minor loadings;
    DEP BY Y7* Y8-Y12
        Y1-Y6 Y13-Y18 (Dxload1-Dxload12); !minor loadings;
    ANX BY Y13* Y14-Y18
        Y1-Y12 (Axload1-Axload12); !minor loadings;
    SOM@1; DEP@1; ANX@1;
    Y1-Y18 (RVar1-RVar18); !residual variances;
    Y1-Y18 WITH Y1-Y18 (RCVar1-RCVar153); !residual covariances;
MODEL PRIORS:
    Sxload1-Sxload12~N(0,0.01); !priors for cross-loadings;
    Dxload1-Dxload12~N(0,0.01);
    Axload1-Axload12~N(0,0.01);
    !priors for residual variances;
    RVar1~IW(50.2,100); RVar2~IW(64.6,100); RVar3~IW(46.8,100);
    RVar4~IW(50.5,100); RVar5~IW(66.8,100); RVar6~IW(43,100);
    RVar7~IW(31.7,100); RVar8~IW(42.5,100);


    RVar9~IW(22.4,100); RVar10~IW(45.7,100); RVar11~IW(67.5,100);
    RVar12~IW(85.6,100); RVar13~IW(50.9,100); RVar14~IW(47.8,100);
    RVar15~IW(53.6,100); RVar16~IW(51.1,100); RVar17~IW(53.8,100);
    RVar18~IW(57.9,100);
    !priors for residual covariances;
    RCVar1-RCVar153~IW(0,100);
OUTPUT: TECH8;

The Bayes estimator is specified in the ANALYSIS command for model estimation. By default, Mplus uses two independent Markov chains for Bayesian estimation. The option BITERATIONS=(10000) requests a minimum of 10 000 iterations, including the discards, for the Monte Carlo sampling. While the default maximum number of iterations for BITERATIONS is 50 000, a fixed larger number of iterations can be requested with FBITERATIONS. The number of iterations used for the posterior distribution (the second half of the total iterations) depends on when convergence occurs. The default option POINT=MEDIAN in the ANALYSIS command indicates that the point estimate of a parameter is the posterior median; the default can be changed using the POINT option (e.g. POINT=MEAN or POINT=MODE). The KOLMOGOROV option specifies the number of samples/draws from the chains for the Kolmogorov-Smirnov (KS) test; by default, 100 posterior draws from each chain are used in KS tests to compare posterior distributions across chains. To reduce sample autocorrelation in MCMC, which could result in biased Monte Carlo standard errors, a common strategy is to thin the Markov chain by keeping only every kth simulated sample in each chain. By default, Mplus does not thin the Markov chain; the THIN=K option can be used to keep every Kth sample from the posterior distribution. The TECH8 option in the OUTPUT command provides both the KS test and the potential scale reduction (PSR) for assessing convergence of the posterior distribution. A total of 36 cross-loadings are labeled Sxload1-Sxload12, Dxload1-Dxload12, and Axload1-Axload12, respectively, and an informative prior of N(0,0.01) is specified for each cross-loading in the MODEL PRIORS command. The 18 residual variances are labeled RVar1-RVar18, and the [18 × (18 − 1)]/2 = 153 residual covariances are labeled RCVar1-RCVar153, respectively.
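For intuition about the PSR convergence statistic reported by TECH8, the sketch below computes the textbook Gelman-Rubin potential scale reduction for two simulated chains. Mplus's PSR is a variant of this quantity, so the function is illustrative only; the chains and their length are made up.

```python
# Textbook Gelman-Rubin potential scale reduction (PSR) for m chains of n
# draws: compare the pooled variance estimate against the average
# within-chain variance. Values near 1 indicate the chains agree.
import random
import statistics

def psr(chains):
    n = len(chains[0])                                            # draws per chain
    means = [statistics.fmean(c) for c in chains]
    w = statistics.fmean(statistics.variance(c) for c in chains)  # within-chain
    b = n * statistics.variance(means)                            # between-chain
    var_plus = (n - 1) / n * w + b / n                            # pooled estimate
    return (var_plus / w) ** 0.5

random.seed(1)
chain1 = [random.gauss(0, 1) for _ in range(2000)]
chain2 = [random.gauss(0, 1) for _ in range(2000)]  # same target distribution

print(round(psr([chain1, chain2]), 3))  # close to 1: chains have mixed
```

If one chain were sampling a shifted distribution, the between-chain term b would inflate the ratio well above 1, which is why a PSR near 1 (such as the 1.003 reported for this example) is read as evidence of convergence.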
The informative priors for the residual variances and covariances are specified via the IW(dD, d) distribution, where d is set to 100 and the values of D are the ML estimates of the residual variances/covariances. For example, the CFA residual variance of item x1 estimated from Mplus Program 2.18 is 0.502, and the corresponding value of dD specified in the IW prior is 0.502 × 100 = 50.2; thus RVar1~IW(50.2,100) is specified as the informative prior for residual variance RVar1 in the MODEL PRIORS command in the Mplus program. The informative priors for the other residual variances are specified in the same way. The IW priors for the residual covariances are easier to form because the estimated CFA residual covariances


are all zero, and thus all 18 × (18 − 1)/2 = 153 IW priors are specified as IW(0,100) in the MODEL PRIORS command. The TECH8 option in the OUTPUT command requests convergence information, including the KS test and PSR, as well as the optimization history of model estimation, in the output. Selected model results of the BCFA are shown in Table 2.16. We first check the convergence of model estimation before evaluating model fit. The KS test p-values are greater than 0.05 for all the parameter estimates, indicating that the posterior distributions in the different chains are similar, i.e. the simulation has converged. Note that we may often encounter small KS p-values for some parameters and see "improper prior" associated with some parameters under "Simulated prior distributions" in the TECH8 output. This can be ignored because the "KS test is too strict and improper priors can still lead to proper posteriors which is all that matters."21 To evaluate convergence of model estimation, we usually focus on the PSR. In our example, after 10 000 iterations the highest PSR in model estimation is 1.003, for parameter 189 (see the bottom of Table 2.16),22 indicating appropriate convergence of the simulation (Gelman et al. 2004). When the TECH8 option is specified, Mplus 8 also provides the PSR for "Iterations for computing PPPP" (prior posterior predictive p-value) (Hoijtink and van de Schoot 2018; Asparouhov and Muthén 2017). The PPPP is a targeted test for specific minor parameters, such as cross-loadings and residual variances/covariances; in the current version of Mplus, however, PPPP is available only for testing intercepts, slopes, and factor loadings, not residual variances/covariances. In our example BCFA model, the highest PSR for computing PPPP is 1.003, for parameter 7, after 10 000 iterations, indicating that the MCMC iterations for estimating the minor parameters (i.e. the cross-loadings in this example) have appropriately converged.
Having confirmed that model estimation has appropriately converged, we evaluate model fit with posterior predictive checking (PPC) (Gelman et al. 1996). The 95% CI of the difference between the observed and model-generated 𝜒² values covers zero (−64.583, 38.931), and the PPP is 0.698. We conclude that the model fits the data well. The PPPP is also produced in the Mplus output. A value of PPPP = 0.926 indicates that we cannot reject the hypothesis of N(0, 0.01) (i.e. the prior distribution we specified for the cross-loadings); in other words, the estimated cross-loadings in our model do not fall outside the N(0, 0.01) distribution. As the model fits the data well, we conclude that setting the degrees-of-freedom hyperparameter of the inverse-Wishart distribution to d = 100 is appropriate. For sensitivity analysis, we can re-estimate the model with different values of d (e.g. d = 50, 150, 200, …); the model with the smallest DIC is preferred. The model results show that all the major factor loadings (i.e. the loading of each item on its underlying factor) are statistically significant and greater than the conventional cut-off point of 0.30. None of the cross-factor loadings or minor factor loadings is statistically significant. Although all the error correlations are small, there

21 See Muthén (2015): http://www.statmodel.com/discussion/messages/11/12237.html?1485882536.
22 With a smaller number of iterations, model estimation would be faster, but PSR may prematurely indicate convergence because PSR can bounce over iterations. To ensure convergence, we re-ran the model using 50 000 iterations (FBITERATIONS=50000 in the Mplus program); the highest PSR = 1.003, confirming convergence.


Table 2.16 Selected Mplus output: BCFA model.

MODEL FIT INFORMATION

Number of Free Parameters                                     246

Bayesian Posterior Predictive Checking using Chi-Square
  95% Confidence Interval for the Difference Between the
  Observed and the Replicated Chi-Square Values      -64.583   38.931
  Posterior Predictive P-Value                                0.698
  Prior Posterior Predictive P-Value                          0.926

Information Criteria
  Deviance (DIC)                                          10724.945
  Estimated Number of Parameters (pD)                       135.971
  Bayesian (BIC)                                          11808.163

…

STANDARDIZED MODEL RESULTS (STDY Standardization)

                Posterior  One-Tailed      95% C.I.
      Estimate       S.D.     P-Value  Lower 2.5%  Upper 2.5%  Significance
SOM BY
 Y1      0.695      0.101       0.000       0.490       0.890       *
 Y2      0.604      0.107       0.000       0.387       0.808       *
 Y3      0.694      0.100       0.000       0.498       0.889       *
 Y4      0.723      0.098       0.000       0.526       0.910       *
 Y5      0.550      0.108       0.000       0.327       0.752       *
 Y6      0.716      0.099       0.000       0.518       0.910       *
 Y7     -0.038      0.078       0.313      -0.197       0.110
 Y8      0.072      0.077       0.173      -0.086       0.220
 …
 Y17     0.004      0.086       0.482      -0.165       0.168
 Y18    -0.004      0.086       0.479      -0.177       0.156
DEP BY
 Y7      0.857      0.083       0.000       0.699       1.026       *
 Y8      0.658      0.089       0.000       0.484       0.831       *
 Y9      0.954      0.081       0.000       0.805       1.125       *
 Y10     0.670      0.091       0.000       0.489       0.846       *
 Y11     0.488      0.102       0.000       0.283       0.679       *
 Y12     0.358      0.109       0.001       0.143       0.570       *
 Y1     -0.005      0.077       0.476      -0.160       0.144
 Y2      0.008      0.079       0.454      -0.152       0.160
 …
 Y17    -0.007      0.082       0.467      -0.171       0.153
 Y18     0.077      0.085       0.179      -0.092       0.240
ANX BY
 Y13     0.668      0.113       0.000       0.446       0.889       *
 Y14     0.697      0.113       0.000       0.473       0.915       *
 Y15     0.671      0.110       0.000       0.446       0.884       *
 Y16     0.712      0.110       0.000       0.487       0.916       *
 Y17     0.677      0.113       0.000       0.447       0.892       *
 Y18     0.588      0.116       0.000       0.358       0.817       *
 Y1      0.006      0.088       0.474      -0.170       0.177
 Y2     -0.026      0.087       0.382      -0.200       0.142
 …
 Y11     0.056      0.086       0.258      -0.110       0.224
 Y12     0.007      0.085       0.471      -0.162       0.169
…

TECHNICAL 8 OUTPUT

Kolmogorov-Smirnov comparing posterior distributions across chains 1 and 2
using 100 draws.

                 KS Statistic    P-value
  Parameter 68         0.1800     0.0691
  Parameter 20         0.1700     0.0994
  …
  Parameter 217        0.0000     1.0000
  Parameter 98         0.0000     1.0000
  …

TECHNICAL 8 OUTPUT FOR BAYES ESTIMATION

  CHAIN     BSEED
      1         0
      2    285380

Iterations for computing PPPP

              POTENTIAL           PARAMETER WITH
  ITERATION   SCALE REDUCTION     HIGHEST PSR
        100       1.228               108
        200       1.126               234
        …
       9900       1.004                 7
      10000       1.003                 7

Iterations for model estimation

              POTENTIAL           PARAMETER WITH
  ITERATION   SCALE REDUCTION     HIGHEST PSR
        100       1.227               108
        200       1.222                 7
        …
       9900       1.003               189
      10000       1.003               189


are several significant error correlations: r(x1, x13), r(x2, x4), r(x5, x6), r(x5, x16), r(x7, x9), r(x10, x11), r(x11, x12), r(x12, x15), and r(x15, x16) (results are not reported in Table 2.16). In this section, we have demonstrated running BCFA in Mplus. By adding small-variance informative priors to the minor parameters (e.g. cross-factor loadings, residual covariances), we are able to relax the rigid framework of CFA, in which cross-loadings and residual covariances are fixed to 0. In applications of CFA, poor model fit is often attributable to unmodeled cross-loadings and, in particular, error covariances. In such cases, BCFA, a more realistic measurement model, enables us to fit the data better.
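The shrinkage behavior of the small-variance priors can be illustrated with a simple conjugate-normal calculation: for a normal likelihood with a normal prior, the posterior mean is a precision-weighted average of the prior mean (here 0) and the data estimate. The numbers below are made up for illustration; they are not BCFA estimates.

```python
# Why a N(0, 0.01) prior keeps minor parameters "approximately zero":
# the posterior mean is pulled from the data estimate toward the prior
# mean of 0 in proportion to the prior's precision. Illustrative only.

def posterior_mean(estimate, se, prior_mean=0.0, prior_var=0.01):
    w_data = 1.0 / se ** 2       # precision of the data estimate
    w_prior = 1.0 / prior_var    # precision of the small-variance prior
    return (w_data * estimate + w_prior * prior_mean) / (w_data + w_prior)

# a cross-loading estimated at 0.30 with SE 0.10 is pulled halfway to 0,
# because here the prior precision (100) equals the data precision (100)
print(round(posterior_mean(0.30, 0.10), 3))  # prints 0.15
```

A very precisely estimated parameter (small SE) resists the pull, which is why sizable, well-supported cross-loadings can still emerge from a BCFA despite the tight prior.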

2.10 Plausible values of latent variables

An often-encountered challenge in applications of SEM is that too many variables are involved in a model. As mentioned earlier, a structural equation model consists of two components: a measurement model and a structural model. Very often there are many observed indicator variables in a measurement model, particularly when it includes multiple latent variables/factors. To make the model parsimonious, researchers often reduce a structural equation model to a path analysis model in which the latent variables/factors are replaced with scale total scores or traditional factor scores estimated from CFA. Instead of using scale total scores and factor scores, an alternative approach is to use plausible values of the latent variables (Mislevy 1991, 1993; Mislevy et al. 1992; von Davier et al. 2009; Asparouhov and Muthén 2010c). Plausible values can be viewed as factor scores generated from multiple imputations (MIs): Mplus treats all latent variables as "observed" variables that have missing values for all observations, and then imputes plausible values for each latent variable based on MCMC Bayesian estimation (Asparouhov and Muthén 2010c,d). MCMC generates a posterior distribution of factor score values for each individual, and random draws from these posterior distributions are referred to as plausible values. Drawing the plausible values repeatedly results in multiple sets of plausible values; the sets differ slightly, but all are representative of the posterior distribution. The imputed plausible values can be saved and used as "observed" variables, which have advantages over total scores (i.e. sums of item scores) or traditional factor scores for further analysis. While using factor scores as dependent variables in secondary analysis gives biased slope estimates, using plausible values can alleviate the biases.
In addition, using plausible values produces more accurate estimates of factor variances and factor correlations (Asparouhov and Muthén 2010d). Plausible values can be estimated for both continuous and categorical latent variables (Asparouhov and Muthén 2010d). The following program imputes and saves plausible values for continuous latent variables.

Mplus Program 2.20

TITLE: BSI-18: Imputing plausible values
DATA: FILE = BSI_18.dat;


VARIABLE: NAMES = X1-X18 Gender Ethnic Age Edu Crack ID;
    MISSING = ALL (-9);
    USEVARIABLES = Y1-Y18;
    IDVARIABLE = ID;
    AUXILIARY = Gender Ethnic Age Edu Crack;
DEFINE: Y1=X1; Y2=X4; Y3=X7; Y4=X10; Y5=X13; Y6=X16;
    Y7=X5; Y8=X2; Y9=X8; Y10=X11; Y11=X14; Y12=X17;
    Y13=X3; Y14=X6; Y15=X9; Y16=X12; Y17=X15; Y18=X18;
    STANDARDIZE Y1-Y18;
ANALYSIS: ESTIMATOR=BAYES;
    PROCESSORS=2;
    !CHAINS=2; !default number of chains;
    !POINT=MEDIAN; !default point estimate;
    BITERATIONS=(10000); !minimum number of total iterations;
    !FBITERATIONS=50000; !requests a fixed number of Bayes iterations;
    !KOLMOGOROV=1000; !default number of draws from chains for KS test;
    !THIN=100; !keep every 100th sample/draw from each chain;
MODEL: SOM BY Y1* Y2-Y6
        Y7-Y18 (Sxload1-Sxload12); !minor loadings;
    DEP BY Y7* Y8-Y12
        Y1-Y6 Y13-Y18 (Dxload1-Dxload12); !minor loadings;
    ANX BY Y13* Y14-Y18
        Y1-Y12 (Axload1-Axload12); !minor loadings;
    SOM@1; DEP@1; ANX@1;
    Y1-Y18 (RVar1-RVar18); !residual variances;
    Y1-Y18 WITH Y1-Y18 (RCVar1-RCVar153); !residual covariances;
MODEL PRIORS:
    Sxload1-Sxload12~N(0,0.01); !priors for cross-loadings;
    Dxload1-Dxload12~N(0,0.01);
    Axload1-Axload12~N(0,0.01);
    !priors for residual variances;
    RVar1~IW(50.2,100); RVar2~IW(64.6,100); RVar3~IW(46.8,100);
    RVar4~IW(50.5,100); RVar5~IW(66.8,100); RVar6~IW(43,100);
    RVar7~IW(31.7,100); RVar8~IW(42.5,100); RVar9~IW(22.4,100);
    RVar10~IW(45.7,100); RVar11~IW(67.5,100);


    RVar12~IW(85.6,100); RVar13~IW(50.9,100); RVar14~IW(47.8,100);
    RVar15~IW(53.6,100); RVar16~IW(51.1,100); RVar17~IW(53.8,100);
    RVar18~IW(57.9,100);
    !priors for residual covariances;
    RCVar1-RCVar153~IW(0,100);
DATA IMPUTATION:
    NDATASETS = 5;
    SAVE = PVSimp*.dat;
SAVEDATA: SAVE = FSCORES(5);
    FILE = Pvalue.dat;

By adding the DATA IMPUTATION and SAVEDATA commands to Mplus Program 2.19, the program imputes multiple sets of plausible values of the latent variables and saves them in separate data files in ASCII format. The DATA IMPUTATION command is used to generate MI datasets when data contain missing values (Asparouhov and Muthén 2010d). As the latent variables are treated as "observed" variables with "missing values" for all observations, the DATA IMPUTATION command is used to impute their values; the imputed values are called plausible values of the latent variables. The subcommand NDATASETS=5 in the DATA IMPUTATION command tells Mplus to impute five sets of plausible values, which are saved by the subcommand SAVE. In our example, the five sets of plausible values are saved with the name prefix PVSimp; the asterisk (*) in the subcommand SAVE=PVSimp*.dat is replaced by the number of the imputed dataset. In addition, Mplus generates a file named PVSimplist.dat that contains the names of the MI datasets. The FSCORES option in the SAVEDATA command tells Mplus to save the factor scores (here, the plausible values). The subcommand FILE in the SAVEDATA command generates summary statistics of the latent variable values, including their means, medians, standard deviations, and 2.5th and 97.5th percentiles, based on the multiple plausible datasets. To obtain precise estimates of such statistics, the number of sets of plausible values should be large (e.g. 100–500). The plausible values of the latent variables can be saved together with the observed indicator variables: by specifying the IDVARIABLE and AUXILIARY subcommands in the VARIABLE command, the individual identification variable and the listed variables will be saved in the plausible-value datasets, so that these variables can be analyzed together with the plausible values of the latent variables in secondary analysis.
Analyzing plausible values is just like analyzing multiple imputation (MI) datasets using Rubin's (1987) method. We will discuss and demonstrate how to use multiple plausible-value datasets for SEM in the next chapter.
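Rubin's pooling rules mentioned above can be sketched numerically: the pooled point estimate is the mean across datasets, and the total variance combines the within- and between-imputation variance. The five estimates and within-imputation variances below are made-up numbers standing in for results from five plausible-value datasets.

```python
# Rubin's (1987) rules for pooling one parameter across m imputed
# (or plausible-value) datasets. All input numbers are illustrative.

def rubin_pool(estimates, variances):
    m = len(estimates)
    qbar = sum(estimates) / m                               # pooled estimate
    w = sum(variances) / m                                  # within-imputation
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)   # between-imputation
    total = w + (1 + 1 / m) * b
    return qbar, total ** 0.5                               # estimate, pooled SE

est, se = rubin_pool([0.52, 0.55, 0.49, 0.53, 0.51], [0.004] * 5)
print(round(est, 3), round(se, 3))  # prints 0.52 0.068
```

Note that the pooled SE (0.068) exceeds the naive per-dataset SE (sqrt(0.004) ≈ 0.063) because the spread of estimates across the plausible-value sets reflects the uncertainty in the latent variable itself.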

Appendix 2.A BSI-18 instrument

The interviewer reads a list of problems and complaints that people sometimes have. The respondent answers each question with a descriptor that best describes how much discomfort that problem has caused the respondent during the past week, including the interview day.

1. Faintness or dizziness
2. Feeling no interest in things
3. Feeling nervous when you are left alone
4. Pains in heart or chest
5. Feeling lonely even when you are with people
6. Feeling tense or keyed up
7. Nausea or upset stomach
8. Feeling blue
9. Suddenly scared for no reason
10. Trouble getting your breath
11. Feeling worthlessness
12. Spells of terror or panic
13. Numbness or tingling in parts of your body
14. Feeling hopeless about the future
15. Feeling so restless you couldn't sit still
16. Feeling weak in parts of your body
17. Thoughts of death or dying
18. Feeling fearful

Responses are measured on a five-point Likert scale: 0 − Not at all, 1 − A little bit, 2 – Moderately, 3 − Quite a bit, 4 − Extremely.


Appendix 2.B Item reliability

Reliability refers to the consistency or repeatability of a measurement. It is defined as the extent to which the variance of an observed variable is explained by the true score that the variable is designed to measure; in other words, reliability is the ratio of the true-score variance to the total variance of the observed measure. Let's define y = 𝜆𝜂 + 𝜀, where 𝜂 is the true score that y is intended to measure and 𝜀 is the measurement error. The reliability of variable y is the ratio of the variance explained by the true score to the observed variance of y:

𝜌y = 𝜆²Var(𝜂)/Var(y) = 𝜆²Var(𝜂)/[𝜆²Var(𝜂) + Var(𝜀)]   (2.B.1)

where Var(𝜂) and Var(𝜀) are the variance of the true score and the variance of the measurement error, respectively, and 𝜆 is the factor loading. Note that the reliability defined in Eq. (2.B.1) is in fact the squared correlation between the observed variable y and its true score:

r²y𝜂 = [Cov(y, 𝜂)]²/[Var(y)Var(𝜂)]
     = {Cov[(𝜆𝜂 + 𝜀), 𝜂]}²/[Var(y)Var(𝜂)]
     = 𝜆²[Var(𝜂)]²/[Var(y)Var(𝜂)]
     = 𝜆²Var(𝜂)/Var(y) = 𝜌y   (2.B.2)

where 𝜆 is the unstandardized factor loading of y on 𝜂. This also proves that the squared standardized factor loading (𝜆²) of an item estimated in a CFA model is the estimated reliability of the item.

In test theory, there are three major types of observed variables: parallel, tau-equivalent, and congeneric measures. As an example, let's define variable y measured at times 1 and 2 as y1 and y2, respectively:

y1 = 𝜆1𝜂 + 𝜀1
y2 = 𝜆2𝜂 + 𝜀2   (2.B.3)

where the measurement errors are assumed to be uncorrelated.

• If 𝜆1 = 𝜆2 = 𝜆 and Var(𝜀1) = Var(𝜀2), then y1 and y2 are parallel measures.
• If 𝜆1 = 𝜆2 = 𝜆 and Var(𝜀1) ≠ Var(𝜀2), then y1 and y2 are tau-equivalent measures.
• If 𝜆1 ≠ 𝜆2 and Var(𝜀1) ≠ Var(𝜀2), then y1 and y2 are congeneric measures, i.e. the most general type of measure.

Assuming parallel measures, the correlation between y1 and y2 can be written as:

r(y1, y2) = Cov(y1, y2)/[Var(y1)Var(y2)]^(1/2)
          = Cov[(𝜆1𝜂 + 𝜀1), (𝜆2𝜂 + 𝜀2)]/Var(y)
          = 𝜆²Var(𝜂)/Var(y) = 𝜌y   (2.B.4)

where 𝜌y is the item reliability. Eq. (2.B.4) implies that the item reliability can be estimated using test-retest measures of the same variable y, assuming (i) the test and retest measures of y are parallel measures; (ii) participants' responses at time 2 do not depend on their previous responses; and (iii) there is no change in the true score between the test and retest occasions. Assuming parallel or tau-equivalent measures, the reliability of y is simply the correlation between the test-retest measures of y. Although the assumption of parallel measures may not hold in real research, test-retest reliability is widely used as an approximate estimate of item reliability in social science studies. Note that in applying this method, appropriately choosing the time interval between test and retest interviews is important: the interval should be short enough to prevent change over time in the variables under study, and long enough to prevent a memory effect from responses in the first test.


Appendix 2.C Cronbach's alpha coefficient

Cronbach's alpha coefficient (Cronbach 1951) is a very popular measure of scale reliability in social science studies. For a set of indicators y1, y2, …, yp that measure the same latent variable 𝜂, the sum of the ys (i.e. Σⱼ yⱼ) is often used as a composite measure of the underlying latent variable. Cronbach's alpha is the squared correlation between Σⱼ yⱼ and 𝜂, which can be estimated as (Bollen 1989, p. 216):

𝛼 = [p/(p − 1)] [1 − Σⱼ Var(yⱼ) / Var(Σⱼ yⱼ)]   (2.C.1)

where p is the number of indicators and j = 1, …, p. This estimate of reliability is not for a single indicator, but for a composite measure (i.e. the unweighted sum of a set of indicators), under the assumption that the measures are either parallel or tau-equivalent. It would underestimate reliability for congeneric measures.
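Eq. (2.C.1) is straightforward to compute. The sketch below uses a tiny made-up dataset of three items scored for six respondents; it is not data from the BSI-18 example:

```python
# Cronbach's alpha per Eq. (2.C.1):
# alpha = [p/(p-1)] * (1 - sum of item variances / variance of the item sum).
# The 3-item, 6-respondent dataset is invented for illustration.
import statistics

def cronbach_alpha(items):
    """items: one list of scores per item, all lists of equal length."""
    p = len(items)
    item_var_sum = sum(statistics.variance(scores) for scores in items)
    totals = [sum(resp) for resp in zip(*items)]   # per-respondent sum score
    return p / (p - 1) * (1 - item_var_sum / statistics.variance(totals))

items = [[2, 4, 3, 5, 1, 4],
         [1, 4, 3, 5, 2, 3],
         [2, 5, 4, 4, 1, 4]]
print(round(cronbach_alpha(items), 3))
```

The three invented items track each other closely, so the resulting alpha is high; uncorrelated items would drive the ratio of summed item variances to total variance toward 1 and alpha toward 0.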


Appendix 2.D Calculating probabilities using probit regression coefficients

Let's define U as an observed binary indicator, y* as the unobserved continuous response variable, 𝜂 as the latent construct variable or factor underlying y*, 𝜆 as the factor loading, and 𝜀 as the measurement error:

y* = 𝜆𝜂 + 𝜀   (2.D.1)

U = 0 if y* ≤ 𝜏; U = 1 otherwise   (2.D.2)

where 𝜏 is a threshold. The probability of having U = 1 is:

P(U = 1 ∣ 𝜂) = P(y* > 𝜏) = P[(𝜆𝜂 + 𝜀) > 𝜏] = P[𝜀 < (−𝜏 + 𝜆𝜂)] = F(−𝜏 + 𝜆𝜂)   (2.D.3)

Either the probit function or the logistic function can serve as the link function relating the observed variable y to the factor 𝜂. Mplus uses a logit link for ML estimators and a probit link for WLS estimators. The default estimator for modeling categorical outcomes is WLSMV; thus the default link function is probit. In Mplus, threshold parameters are estimated instead of intercepts, and the intercept is represented by the threshold (i.e. 𝛼 = −𝜏). Using Eq. (2.D.3), the unstandardized probit regression estimates can be used to calculate the probability of U = 1. As F(−𝜏 + 𝜆𝜂) is a CDF, P(U = 1 ∣ 𝜂) can be found in a Z distribution table or readily calculated using statistical packages such as SAS. For ordered categorical outcome measures with more than two categories, the probability of being in each of the categories 0 to M can be calculated using probit coefficients (Muthén and Muthén 1998–2017):

P(U = 0 ∣ 𝜂) = F(𝜏1 − 𝜆𝜂)   (2.D.4)

P(U = 1 ∣ 𝜂) = F(𝜏2 − 𝜆𝜂) − F(𝜏1 − 𝜆𝜂)   (2.D.5)

…

P(U = M ∣ 𝜂) = F(−𝜏M + 𝜆𝜂)   (2.D.6)
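Eqs. (2.D.4)–(2.D.6) can be evaluated directly with the standard normal CDF. The thresholds and factor loading below are made-up values for a five-category item; F is Φ, the standard normal distribution function:

```python
# Category probabilities from probit thresholds, per Eqs. (2.D.4)-(2.D.6):
# P(U = k | eta) is the normal-CDF mass between adjacent thresholds
# evaluated at tau - lambda*eta. Thresholds and loading are illustrative.
import math

def norm_cdf(x):
    """Standard normal CDF, Phi(x)."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def category_probs(thresholds, lam, eta):
    """P(U = 0..M | eta), given M ordered thresholds tau_1 < ... < tau_M."""
    cuts = [norm_cdf(t - lam * eta) for t in thresholds]
    probs = [cuts[0]]                              # P(U = 0) = F(tau_1 - lam*eta)
    probs += [hi - lo for lo, hi in zip(cuts, cuts[1:])]
    probs.append(1 - cuts[-1])                     # = F(-tau_M + lam*eta)
    return probs

probs = category_probs([-1.0, 0.0, 1.0, 2.0], lam=0.9, eta=0.5)
print([round(p, 3) for p in probs])  # five probabilities summing to 1
```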

3 Structural equation models

3.1 Introduction

In Chapter 2, we discussed and demonstrated confirmatory factor analysis (CFA) models. Once the factorial structure of the underlying constructs has been validated using CFA, the measurement model is ready to be used for further studies of relationships involving latent variables/factors. Covariates can be included in a CFA model to study relationships between latent variables and observed covariates. A CFA model with covariates, also called a multiple indicators, multiple causes (MIMIC) model, can be used to study not only the relationships between factors and covariates, but also measurement invariance. When any covariance/correlation between latent variables/factors (represented by a curved line with an arrow at both ends in the model diagram) is replaced with a causal effect (represented by a line with an arrow in one direction in the model diagram), the model becomes a structural equation model (SEM), in which a specific latent variable/factor can be specified to predict other latent variables/factors or to be influenced by them. In addition, observed exogenous variables or covariates can be included to predict latent variables/factors, and the latter can in turn predict observed endogenous dependent variables in the structural equation model. We begin this chapter with the MIMIC model, a special case of the structural equation model, in Section 3.2; testing measurement invariance by examining differential item functioning (DIF) in the MIMIC model is also discussed there. Section 3.3 presents an example of a general structural equation model. Section 3.4 addresses correcting for measurement error in a single indicator variable in such a model. The next section describes testing interactions involving latent variables. The last three sections of the chapter introduce some newly developed structural equation models.
Section 3.6 is about a moderated mediation model, where application of the bootstrap method and estimation of counterfactual-based causal


effects in Mplus are discussed. Application of plausible values of latent variables in structural equation models and Bayesian structural equation modeling (BSEM) are discussed and demonstrated in Sections 3.7 and 3.8, respectively.

3.2 Multiple indicators, multiple causes (MIMIC) model

A MIMIC model is a special case of a structural equation model in which there are multiple indicators reflecting the underlying latent variables/factors, and multiple causes (predictors) affecting the latent variables/factors. When the covariance structure (COVS) is analyzed, the MIMIC model is described as:

η = ΓX + ζ
Y = Λy η + ε
X ≡ ξ   (3.1)

where multiple endogenous indicators (y's) are used to measure the endogenous latent variables (η's). No causal effects, but covariances/correlations, are specified among the η's; and the η's are affected by exogenous indicators (x's), which are assumed to be perfect measures of the exogenous latent variables (𝜉's). (For example, respondents' self-reported gender status is often treated as a measure of sex identity without measurement error.) The symbol ≡ specifies an identity between x and 𝜉 by fixing factor loadings to 1.0 (i.e. Λx = 1) and measurement errors to 0 (i.e. Θ𝛿 = 0). When the mean and covariance structure (MACS) is analyzed, the MIMIC model is described as:

η = ΓX + ζ
Y = 𝜐y + Λy η + ε
X ≡ ξ   (3.2)

where 𝜐y (nu) is the vector of means/intercepts of the y endogenous indicators. Note that there are no factor intercepts in the equation because factor means/intercepts in a single group model must be fixed to 0 for the purpose of model identification. Factor mean differences between groups can be examined in multigroup modeling, as we will discuss in Chapter 5. In this section, we illustrate the MIMIC model using the same BSI-18 dataset that was partially used for models in Chapter 2. The MIMIC model specified in Figure 3.1 consists of two parts: (i) the measurement model, in which 18 observed indicators/items measure three underlying latent variables/factors (i.e. SOM – somatization [η1 ], DEP – depression [η2 ], and ANX – anxiety [η3 ]), as discussed in Chapter 2; and (ii) structural equations, in which observed x variables, such as gender (Gender: 1 – male; 0 – female), ethnicity (Ethnic: 1 – white; 0 – non-white), age (Age), and education (Edu: 1 – no formal education; 2 – less than high school education;

Figure 3.1 MIMIC model.

3 – some high school education; 4 – high school graduate; 5 – some college; and 6 – college graduate) predict the three latent variables/factors. The measurement part of the MIMIC model can be described as:

⎡ y1  ⎤   ⎡ 𝜐1  ⎤   ⎡ 1       0       0      ⎤          ⎡ ε1  ⎤
⎢ y4  ⎥   ⎢ 𝜐4  ⎥   ⎢ 𝜆y41    0       0      ⎥          ⎢ ε4  ⎥
⎢ …   ⎥   ⎢ …   ⎥   ⎢ …       …       …      ⎥          ⎢ …   ⎥
⎢ y16 ⎥   ⎢ 𝜐16 ⎥   ⎢ 𝜆y161   0       0      ⎥          ⎢ ε16 ⎥
⎢ y5  ⎥   ⎢ 𝜐5  ⎥   ⎢ 0       1       0      ⎥  ⎡ η1 ⎤  ⎢ ε5  ⎥
⎢ y2  ⎥ = ⎢ 𝜐2  ⎥ + ⎢ 0       𝜆y22    0      ⎥  ⎢ η2 ⎥ + ⎢ ε2  ⎥
⎢ …   ⎥   ⎢ …   ⎥   ⎢ …       …       …      ⎥  ⎣ η3 ⎦  ⎢ …   ⎥
⎢ y17 ⎥   ⎢ 𝜐17 ⎥   ⎢ 0       𝜆y172   0      ⎥          ⎢ ε17 ⎥
⎢ y3  ⎥   ⎢ 𝜐3  ⎥   ⎢ 0       0       1      ⎥          ⎢ ε3  ⎥
⎢ y6  ⎥   ⎢ 𝜐6  ⎥   ⎢ 0       0       𝜆y63   ⎥          ⎢ ε6  ⎥
⎢ …   ⎥   ⎢ …   ⎥   ⎢ …       …       …      ⎥          ⎢ …   ⎥
⎣ y18 ⎦   ⎣ 𝜐18 ⎦   ⎣ 0       0       𝜆y183  ⎦          ⎣ ε18 ⎦
                                                            (3.3)


STRUCTURAL EQUATION MODELING

This is equivalent to:

y1 = υ1 + η1 + ε1
y4 = υ4 + λ41 η1 + ε4
…
y16 = υ16 + λ16,1 η1 + ε16

y5 = υ5 + η2 + ε5
y2 = υ2 + λ22 η2 + ε2
…
y17 = υ17 + λ17,2 η2 + ε17

y3 = υ3 + η3 + ε3
y6 = υ6 + λ63 η3 + ε6
…
y18 = υ18 + λ18,3 η3 + ε18

(3.4)

The measurement part of the MIMIC model has already been tested in the CFA models in Chapter 2. The other component of the MIMIC model is the set of structural equations that examine the causal relationships between the predictors and the latent variables. The structural equation part of the MIMIC model can be expressed in matrix notation:1

$$
\begin{bmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \end{bmatrix}
=
\begin{bmatrix}
\gamma_{11} & \gamma_{12} & \gamma_{13} & \gamma_{14} \\
\gamma_{21} & \gamma_{22} & \gamma_{23} & \gamma_{24} \\
\gamma_{31} & \gamma_{32} & \gamma_{33} & \gamma_{34}
\end{bmatrix}
\begin{bmatrix} \text{Gender} \\ \text{Ethnic} \\ \text{Age} \\ \text{Edu} \end{bmatrix}
+
\begin{bmatrix} \zeta_1 \\ \zeta_2 \\ \zeta_3 \end{bmatrix}
\tag{3.5}
$$

This is equivalent to:

η1 = γ11 Gender + γ12 Ethnic + γ13 Age + γ14 Edu + ζ1
η2 = γ21 Gender + γ22 Ethnic + γ23 Age + γ24 Edu + ζ2
η3 = γ31 Gender + γ32 Ethnic + γ33 Age + γ34 Edu + ζ3

(3.6)

where the three multiple regression equations look like the simultaneous equations models in econometrics: multiple dependent variables are functions of a set of explanatory variables, and the residual terms (ζ1, ζ2, ζ3) of the equations are correlated with each other. Unlike traditional simultaneous equation models, however, the dependent variables in the MIMIC model are unobserved latent variables. This approach is clearly better than traditional simultaneous equation models or multivariate analysis of variance (MANOVA), which assume variables are measured without error. The following Mplus program runs the MIMIC model.

1 As SEM used to be called LISREL modeling (Jöreskog and Van Thillo 1972), the popular LISREL notations are used in the equations to specify the slope coefficients (γ's) of regressing latent variables on the exogenous covariates. However, those regression coefficients (γ's) are all specified in the BETA matrix in the Mplus TECH1 output.

Mplus Program 3.1

TITLE: BSI-18: MIMIC Model
DATA: FILE = BSI_18.dat;
VARIABLE: NAMES = Y1-Y18 Gender Ethnic Age Edu Crack ID;
  MISSING = ALL (-9);


  USEVARIABLES = Y1-Y18 Gender Ethnic Age Edu;
ANALYSIS: ESTIMATOR = MLR;
MODEL:
  SOM BY Y1 Y4 Y7 Y10 Y13 Y16; !Somatization;
  DEP BY Y5 Y2 Y8 Y11 Y14 Y17; !Depression;
  ANX BY Y3 Y6 Y9 Y12 Y15 Y18; !Anxiety;
  SOM on gender ethnic age edu;
  DEP on gender ethnic age edu;
  ANX on gender ethnic age edu;
OUTPUT: SAMPSTAT STDYX TECH4;
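The structural part in Eqs. (3.5) and (3.6) is just a matrix product plus a residual vector; as a sketch, with Γ values invented purely for illustration (not estimates from this model):

```python
# Hypothetical sketch of Eq. (3.5): eta = Gamma * x + zeta.
# The gamma coefficients are invented, not estimates from the BSI-18 model.
gamma = [
    [0.10, -0.20, 0.01, -0.05],   # eta1 (SOM) on Gender, Ethnic, Age, Edu
    [-0.30, 0.25, 0.02, -0.10],   # eta2 (DEP)
    [-0.40, 0.30, 0.01, -0.08],   # eta3 (ANX)
]
x = [1, 0, 30, 4]                 # one subject: Gender=1, Ethnic=0, Age=30, Edu=4
zeta = [0.0, 0.0, 0.0]            # residuals set to 0 for the illustration

# Each element of eta evaluates one of the regression equations in Eq. (3.6).
eta = [sum(g * xi for g, xi in zip(row, x)) + z for row, z in zip(gamma, zeta)]
print([round(e, 3) for e in eta])
```

Each element of `eta` is one of the three regression equations in Eq. (3.6) evaluated for that subject; in the MIMIC model, of course, the η's are latent and are estimated jointly with the measurement model rather than computed directly.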

Data are read from the data set BSI_18.dat. The observed variables y1–y18 are indicators of three latent variables (η1 – SOM; η2 – DEP; and η3 – ANX), and four covariates (Gender, Ethnic, Age, and Edu) are used to predict the latent variables. Considering the possible multivariate non-normality in the measures, the robust maximum likelihood (MLR) estimator is used for model estimation. Selected model results are shown in Table 3.1. The model fits the data very well: RMSEA = 0.055 (90% CI: 0.045, 0.064), close-fit test P = 0.207, CFI = 0.918, …

Table 3.1 Selected Mplus output: MIMIC model.

MODEL FIT INFORMATION
…
RMSEA (Root Mean Square Error Of Approximation)
  Estimate                     0.055
  90 Percent C.I.              0.045  0.064
  Probability RMSEA <= .05     0.207
…

DEFINE: … if edu>4 then Hsch=1;
ANALYSIS: ESTIMATOR = MLR;
MODEL:
  DEP BY Y5 Y2 Y8 Y11 Y14 Y17; !Depression;
  ANX BY Y3 Y6 Y9 Y12 Y15 Y18; !Anxiety;
  Y5 with Y8;
  DEP on ANX Gender Ethnic Age Hsch;
  ANX on Gender Ethnic Age Hsch;
OUTPUT: SAMPSTAT STDYX;

Selected model results are shown in Table 3.5. The model fits the data very well: RMSEA = 0.050 (90% CI: 0.035, 0.064), the close-fit test is not statistically significant (P = 0.483), CFI = 0.951, TLI = 0.940, and SRMR = 0.044. The latent variable anxiety (ANX) has a significant positive effect (0.910, P < 0.001) on depression


Table 3.5 Selected Mplus output: structural equation model.

MODEL FIT INFORMATION
…
RMSEA (Root Mean Square Error Of Approximation)
  Estimate                     0.050
  90 Percent C.I.              0.035  0.064
  Probability RMSEA <= .05     0.483
…

DEFINE: … if edu>4 then Hsch=1;
ANALYSIS: ESTIMATOR = MLR;
  !BOOTSTRAP=10000;
MODEL:
  DEP BY Y5 Y2 Y8 Y11 Y14 Y17; !Depression;
  ANX BY Y3 Y6 Y9 Y12 Y15 Y18; !Anxiety;
  Y5 with Y8;
  DEP on ANX Gender Ethnic Age Hsch;
  ANX on Gender Ethnic Age Hsch;
MODEL INDIRECT:
  DEP via ANX Gender;
  DEP via ANX Ethnic;
!OUTPUT: CINTERVAL(BOOTSTRAP);

The MODEL INDIRECT command is used to request estimates of indirect effects and their standard errors (SEs). The VIA option of the MODEL INDIRECT command is used to test the indirect effects of the covariates Gender and Ethnic on DEP via ANX; that is, the effects of Gender and Ethnic on DEP are mediated by ANX. Note that with the MODEL INDIRECT command or the MODEL CONSTRAINT command, by default, Mplus computes the SEs of indirect effects using the multivariate delta method (Sobel 1982). As the sampling distribution of an indirect effect computed as a product of coefficients often does not approach normality, the Sobel test may produce inaccurate results unless the sample size is sufficiently large (MacKinnon et al. 2002). Using the BOOTSTRAP statement in the ANALYSIS command together with the CINTERVAL(BOOTSTRAP) option in the OUTPUT command, Mplus will


use the bootstrap approach for significance testing of parameter estimates (Bollen and Stine 1992; Shrout and Bolger 2002). We will further demonstrate this application in Section 3.7. Table 3.6 shows that Gender has a negative (−0.465, P < 0.001) indirect effect on DEP via ANX, while Ethnic has a positive (0.441, P = 0.007) indirect effect. In a more sophisticated model, an exogenous (observed or latent) variable or an endogenous (observed or latent) variable may indirectly affect other endogenous (observed or latent) variables, and its specific indirect and total effects on an endogenous variable can be tested. In Figure 3.4, an additional endogenous variable (Crack, measuring crack-cocaine frequency of use in the past 30 days) is included in the model. This is an observed endogenous variable, assumed for the moment to have no measurement error. Again, the relationship between substance abuse and mental health is complicated, and the links between them may arise from several different mechanisms. Substance abuse may potentially trigger or relate to the development of psychiatric symptoms; conversely, underlying psychopathology may contribute to the abuse of psychoactive substances. In addition, some factors may increase individuals' vulnerability to both substance abuse and mental illness. For the purpose of model demonstration, we assume crack-cocaine use causes mental problems and that there is no reciprocal effect in our example model. The covariates (e.g. Gender, Ethnic, Age, and Hsch), which are observed exogenous variables, are hypothesized to influence the endogenous latent variables anxiety (η3 – ANX) and depression (η4 – DEP) both directly and indirectly through the endogenous observed variable crack-cocaine use frequency (η1 – Crack). We will test the direct, indirect, and total effects of Gender and Ethnic on DEP in the following Mplus program.
Mplus Program 3.7

TITLE: Testing Direct, Specific Indirect, Total Indirect, and Total Effects
DATA: FILE = BSI_18.dat;
VARIABLE: NAMES = Y1-Y18 Gender Ethnic Age Edu Crack id;
  MISSING = ALL (-9);
  USEVARIABLES = Y5 Y2 Y8 Y11 Y14 Y17 Y3 Y6 Y9 Y12 Y15 Y18
                 Gender Ethnic Age Crack Hsch;
DEFINE: Hsch=0; if edu>4 then Hsch=1;
ANALYSIS: ESTIMATOR = MLR;
MODEL:
  DEP BY Y5 Y2 Y8 Y11 Y14 Y17; !Depression;
  ANX BY Y3 Y6 Y9 Y12 Y15 Y18; !Anxiety;
  Y5 with Y8;
  DEP ANX on Crack Gender Ethnic Age Hsch;
  DEP on ANX;
  Crack on Gender Ethnic Age Hsch;
MODEL INDIRECT:
  DEP IND Gender;
  DEP IND Ethnic;
OUTPUT: SAMPSTAT STDYX;

Table 3.6 Selected Mplus output: testing indirect effects.

TOTAL, TOTAL INDIRECT, SPECIFIC INDIRECT, AND DIRECT EFFECTS

                              Estimate     S.E.   Est./S.E.   Two-Tailed P-Value

Effects from GENDER to DEP via ANX
  Sum of indirect               -0.465    0.128     -3.633        0.000
  Specific indirect
    DEP ANX GENDER              -0.465    0.128     -3.633        0.000

Effects from ETHNIC to DEP via ANX
  Sum of indirect                0.441    0.163      2.709        0.007
  Specific indirect
    DEP ANX ETHNIC               0.441    0.163      2.709        0.007

Figure 3.4 Testing indirect and total effects in SEM. (Path diagram: the covariates Gender, Ethnic, Age, and Hsch predict the observed endogenous variable Crack (η1), which in turn, together with the covariates, affects the anxiety and depression factors, each measured by six indicators.)


The MODEL INDIRECT command is used to test the specific indirect effects, total indirect effects, and total effects. The variable on the left side of the keyword IND is the dependent variable, and the variable on the right side is the independent variable. If a mediating variable is specified between IND and the independent variable, the program will produce only the specific indirect and total indirect effects of the independent variable via that mediating variable. In this example, no mediating variables are specified, and the DEP IND Gender statement requests estimates of the direct effect, specific indirect effects, total indirect effect, and total effect of Gender on DEP (Table 3.7). The direct effect of Gender on DEP is not statistically significant (0.094, P = 0.361), but its total indirect effect is negative and statistically significant (−0.460, P < 0.001) and consists of three specific indirect effects: (i) via Crack (0.011, P = 0.390); (ii) via ANX (−0.439, P = 0.001); and (iii) via Crack and then ANX (−0.032, P = 0.192). The total effect of Gender on DEP, which is the sum of all its direct and indirect effects, is statistically significant (−0.366, P = 0.009). The effects of Ethnic on DEP are interpreted in the same way. The indirect effect can also be tested using the MODEL CONSTRAINT command in Mplus. We will demonstrate how to do so later, in Mplus Program 3.11.
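The product-of-coefficients logic behind these output panels is easy to verify by hand. Using the coefficients this same model reports in the upper panel of Table 3.8 (ANX on Gender = −0.477, SE 0.129; DEP on ANX = 0.921, SE 0.111), the sketch below reproduces the −0.439 specific indirect effect via ANX and a basic Sobel (delta-method) SE; the basic formula omits the parameter covariance that Mplus's delta method includes, so the SE differs slightly from the reported 0.128:

```python
import math

# Product-of-coefficients indirect effect and the basic Sobel SE:
# SE(a*b) ~ sqrt(b^2*se_a^2 + a^2*se_b^2); the covariance term is omitted here.
a, se_a = -0.477, 0.129   # ANX on Gender (Table 3.8, upper panel)
b, se_b = 0.921, 0.111    # DEP on ANX

indirect = a * b
sobel_se = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
print(round(indirect, 3), round(sobel_se, 3))

# The total effect is the direct effect plus all specific indirect effects
# (Table 3.7, effects from Gender to DEP):
total = 0.094 + 0.011 + (-0.439) + (-0.032)
print(round(total, 3))
```

The product reproduces the −0.439 specific indirect effect in Table 3.7, and the sum reproduces the −0.366 total effect; for significance testing, the bootstrap CI described above is generally preferable to the Sobel approximation.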

3.4 Correcting for measurement error in single indicator variables

In Chapters 1 and 2, we discussed that the measurement model is designed to handle measurement errors in observed variables and that at least three indicators per factor are needed in a CFA model. Very often, however, observed variables are treated as either independent or dependent variables in a model under the assumption of no measurement error. To understand how failing to account for measurement error in a single indicator can attenuate parameter estimates, the influence of measurement errors is briefly reviewed in Appendix 3.A. When measurement errors are ignored in a regression, biased parameter estimates and SEs can occur (Hayduk 1987). A variety of statistical methods (both parametric and non-parametric) can be used to correct for measurement errors and to adjust the relations between the flawed variables and others (Allison and Hauser 1991; Armstrong et al. 1989; Greenland and Kleinbaum 1983; Marshall and Graham 1984; Rosner et al. 1990; Thomas et al. 1993). When multiple indicators per latent variable are available, SEM is a powerful approach to mitigating the problems of measurement error in understanding the relationships among the variables in the model. In the case where a single indicator variable is included in a model to predict endogenous variable(s), an appropriate way to adjust for the influence of measurement error is to employ an external measurement reliability estimate for this variable.4 Once the reliability of a variable is known or approximated, its error variance can be treated as a fixed parameter in the model so that its measurement error is controlled in modeling (Bollen 1989; Hayduk 1987; Jöreskog and Sörbom 1989; Munck 1991; Wang et al. 1995).
4 Item reliabilities can be estimated using classical reliability measures, such as test-retest or split-half reliability, or from multiple wave panel data (Heise 1969; Heise and Bohrnstedt 1970; Palmquist and Green 1992; Wang et al. 1995; Werts and Jöreskog 1971; Wiley and Wiley 1970).

Table 3.7 Selected Mplus output: testing direct, specific indirect, total indirect, and total effects.

TOTAL, TOTAL INDIRECT, SPECIFIC INDIRECT, AND DIRECT EFFECTS

                              Estimate     S.E.   Est./S.E.   Two-Tailed P-Value

Effects from GENDER to DEP
  Total                         -0.366    0.141     -2.594        0.009
  Total indirect                -0.460    0.128     -3.583        0.000
  Specific indirect
    DEP CRACK GENDER             0.011    0.013      0.860        0.390
    DEP ANX GENDER              -0.439    0.128     -3.444        0.001
    DEP ANX CRACK GENDER        -0.032    0.024     -1.305        0.192
  Direct
    DEP GENDER                   0.094    0.103      0.914        0.361

Effects from ETHNIC to DEP
  Total                          0.588    0.181      3.245        0.001
  Total indirect                 0.458    0.164      2.798        0.005
  Specific indirect
    DEP CRACK ETHNIC             0.012    0.015      0.788        0.431
    DEP ANX ETHNIC               0.480    0.160      2.997        0.003
    DEP ANX CRACK ETHNIC        -0.033    0.031     -1.058        0.290
  Direct
    DEP ETHNIC                   0.130    0.116      1.122        0.262

(continued)


Table 3.7 (continued)

STANDARDIZED TOTAL, TOTAL INDIRECT, SPECIFIC INDIRECT, AND DIRECT EFFECTS
STDYX Standardization

                              Estimate     S.E.   Est./S.E.   Two-Tailed P-Value

Effects from GENDER to DEP
  Total                         -0.180    0.068     -2.651        0.008
  Total indirect                -0.227    0.060     -3.767        0.000
  Specific indirect
    DEP CRACK GENDER             0.005    0.006      0.864        0.387
    DEP ANX GENDER              -0.217    0.060     -3.628        0.000
    DEP ANX CRACK GENDER        -0.016    0.012     -1.301        0.193
  Direct
    DEP GENDER                   0.046    0.051      0.919        0.358

Effects from ETHNIC to DEP
  Total                          0.207    0.061      3.385        0.001
  Total indirect                 0.161    0.056      2.892        0.004
  Specific indirect
    DEP CRACK ETHNIC             0.004    0.005      0.789        0.430
    DEP ANX ETHNIC               0.169    0.054      3.096        0.002
    DEP ANX CRACK ETHNIC        -0.012    0.011     -1.053        0.292
  Direct
    DEP ETHNIC                   0.046    0.041      1.126        0.260


When a latent variable η has only one observed indicator y, the simple measurement model is:

y = λy η + ε   (3.12)

Then the error variance θε, which is the variance of y unexplained by the latent variable η, can be described as:

θε = Var(y) − λy² Var(η) = Var(y) − Var(y) ρy = Var(y)(1 − ρy)   (3.13)

where Var(y) is the variance of the observed indicator variable y, and ρy is the reliability of y (see Appendix 2.B). To control for measurement error in a single indicator, the error variance θε = Var(y)(1 − ρy) is calculated and specified in the model, while the factor loading λy is fixed to 1.0. In the model shown in Figure 3.5, crack use frequency is treated as a single indicator variable (Crack); its variance estimated from the sample is Var(Crack) = 85.65, and its reliability estimated from a test-retest is ρCrack = 0.72. With this information, its error variance can be estimated using Eq. (3.13):

θε = 85.65 × (1 − 0.72) = 23.98   (3.14)

This error variance θε is then specified in the following Mplus program to correct for measurement error in the single observed variable Crack.
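The computation in Eqs. (3.13) and (3.14) is a one-liner; a sketch:

```python
# Fixed error variance for a single indicator, Eq. (3.13):
# theta_eps = Var(y) * (1 - rho_y).
def single_indicator_error_variance(var_y, reliability):
    """Error variance implied by the indicator's variance and reliability."""
    return var_y * (1.0 - reliability)

# Crack use frequency: Var(Crack) = 85.65, test-retest reliability = 0.72.
theta = single_indicator_error_variance(85.65, 0.72)
print(round(theta, 2))  # 23.98, the value fixed via [email protected] below
```

The returned value is exactly what is fixed in the Mplus MODEL command; a lower reliability would yield a larger fixed error variance.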

Figure 3.5 Correcting for measurement error in a single indicator. (Path diagram: as in Figure 3.4, but the observed Crack variable loads on a latent crack-use factor η1 with the loading fixed at 1.0 and the error variance fixed at θ = 23.98.)


Mplus Program 3.8

TITLE: Correcting for Measurement Error in Single Indicator
DATA: FILE = BSI_18.dat;
VARIABLE: NAMES = Y1-Y18 Gender Ethnic Age Edu Crack id;
  MISSING = ALL (-9);
  USEVARIABLES = Y5 Y2 Y8 Y11 Y14 Y17 Y3 Y6 Y9 Y12 Y15 Y18
                 Gender Ethnic Age Crack Hsch;
DEFINE: Hsch=0; if edu>4 then Hsch=1;
ANALYSIS: ESTIMATOR = MLR;
MODEL:
  DEP BY Y5 Y2 Y8 Y11 Y14 Y17; !Depression;
  ANX BY Y3 Y6 Y9 Y12 Y15 Y18; !Anxiety;
  Y5 with Y8;
  [email protected];
  Eta1 BY CRACK@1;
  DEP on ANX;
  DEP ANX on Eta1 Gender Ethnic Age Hsch;
  Eta1 on Gender Ethnic Age Hsch;
OUTPUT: SAMPSTAT STDYX;

Table 3.8 Comparisons of model results with and without correcting for measurement error in a single indicator.

Ignore measurement error

                  Estimate     S.E.   Est./S.E.   Two-Tailed P-Value
DEP ON
  ANX                0.921    0.111      8.279        0.000
DEP ON
  CRACK             -0.005    0.005     -1.019        0.308
  GENDER             0.094    0.103      0.914        0.361
  ETHNIC             0.130    0.116      1.122        0.262
  AGE                0.005    0.006      0.768        0.443
  HSCH              -0.112    0.155     -0.721        0.471
ANX ON
  CRACK              0.017    0.006      2.705        0.007
  GENDER            -0.477    0.129     -3.689        0.000
  ETHNIC             0.521    0.173      3.008        0.003
  AGE                0.012    0.007      1.625        0.104
  HSCH              -0.139    0.226     -0.616        0.538
CRACK ON
  GENDER            -2.061    1.316     -1.566        0.117
  ETHNIC            -2.165    1.849     -1.171        0.242
  AGE                0.076    0.071      1.072        0.284
  HSCH               0.483    2.398      0.202        0.840

Table 3.8 (continued)

Correcting for measurement error

                  Estimate     S.E.   Est./S.E.   Two-Tailed P-Value
DEP ON
  ANX                0.926    0.113      8.198        0.000
  ETA1              -0.008    0.007     -1.020        0.308
ANX ON
  ETA1               0.023    0.009      2.676        0.007
DEP ON
  GENDER             0.092    0.103      0.894        0.371
  ETHNIC             0.123    0.118      1.042        0.297
  AGE                0.005    0.006      0.788        0.431
  HSCH              -0.110    0.155     -0.713        0.476
ANX ON
  GENDER            -0.463    0.130     -3.569        0.000
  ETHNIC             0.536    0.173      3.096        0.002
  AGE                0.011    0.007      1.550        0.121
  HSCH              -0.142    0.224     -0.635        0.526
ETA1 ON
  GENDER            -2.061    1.316     -1.566        0.117
  ETHNIC            -2.165    1.849     -1.171        0.242
  AGE                0.076    0.071      1.072        0.284
  HSCH               0.483    2.398      0.202        0.840

Eta1 (η1) is the "true" measure of crack-cocaine use frequency. The error variance of the observed variable Crack is fixed by the statement [email protected], and its factor loading is fixed to 1.0 by the statement Eta1 BY CRACK@1 in the MODEL command. Table 3.8 compares the model results with and without correcting for measurement error in the variable Crack (the observed measure of crack-cocaine use frequency). Without taking the measurement error into account, Crack has no significant effect (−0.005, P = 0.308) on depression (DEP) but a significant positive effect (0.017, P = 0.007) on anxiety (ANX) (see the upper panel of Table 3.8). After correcting for measurement error in Crack, the corresponding coefficients become −0.008 (P = 0.308) and 0.023 (P = 0.007), respectively. The results show that the effects of crack-cocaine use frequency on depression and anxiety are attenuated when its measurement error is not handled in modeling; the smaller the reliability of the single indicator, the larger the attenuation. The effects of the covariates (i.e. Gender, Ethnic, Age, and Hsch) on Crack remain unchanged with and without the correction. This is because Crack is the dependent variable in that equation, and its measurement error is absorbed into the equation's residual term.
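The attenuation pattern can be reproduced with a small Monte Carlo sketch (invented data, standard library only): regressing an outcome on an error-contaminated predictor shrinks the OLS slope by roughly the reliability factor ρ.

```python
# Monte Carlo sketch of attenuation bias (invented data, not BSI-18 data).
import random

random.seed(7)
n = 20_000
beta, rho = 0.5, 0.72                 # true slope; reliability of the indicator
err_sd = ((1 - rho) / rho) ** 0.5     # chosen so Var(x_obs) * rho = Var(x_true)

x_true = [random.gauss(0, 1) for _ in range(n)]
x_obs = [x + random.gauss(0, err_sd) for x in x_true]   # error-contaminated
y = [beta * x + random.gauss(0, 1) for x in x_true]

def ols_slope(xs, ys):
    """Simple-regression slope: cov(x, y) / var(x)."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    return sxy / sxx

print(round(ols_slope(x_true, y), 2))  # close to the true slope, 0.5
print(round(ols_slope(x_obs, y), 2))   # attenuated toward rho * 0.5 = 0.36
```

Fixing the indicator's error variance as in Mplus Program 3.8 is the model-based counterpart of undoing this shrinkage.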


The same approach can also be used to adjust for measurement error in a scale composite score that serves as a proxy for a latent construct. A composite score has higher reliability than any of its individual indicators, and the estimate of the composite score's Cronbach's alpha can be used as the scale reliability index (Cohen et al. 1990).
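As a concrete reference point for using Cronbach's alpha as the reliability index, alpha can be computed from the item variances and the variance of the summed score. A minimal sketch with invented toy scores (not BSI-18 data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum(item variances) / var(total)).

    `items` is a list of k per-item score lists of equal length.
    """
    k = len(items)

    def var(xs):  # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Invented two-item example: perfectly parallel items give alpha = 1.0.
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]))
```

The resulting alpha, together with the composite's sample variance, feeds directly into Eq. (3.13) to fix the composite's error variance.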

3.5 Testing interactions involving latent variables

In Mplus Program 3.2, we demonstrated how to test interactions between two observed variables in a MIMIC model by creating a new variable that is the product of the two observed variables. In this section, we discuss and demonstrate how to test interactions that involve latent variables.5 Testing interactions involving latent variables has long been a challenge; fortunately, Mplus has made this task possible and easy. Here, we limit our discussion to interactions that involve only continuous latent variables. Some applications of testing interactions involving categorical latent variables (latent class variables) are available in Appendix 6.A, in which the interaction between a covariate and the baseline latent class variable in a latent transition analysis (LTA) model is discussed. In the model shown in Figure 3.6, the interaction involves the latent variable ANX (η3) and the observed endogenous variable Crack (η1). The interaction between

Figure 3.6 Testing interactions involving a latent variable. (Path diagram: as in Figure 3.4, with the interaction between ANX (η3) and Crack (η1), shown as a filled circle, also predicting depression.)

5 Alternatively, interactions in SEM can be tested using multigroup modeling, in which the same model is specified and estimated simultaneously in each of the groups (e.g. treatment vs. control groups). This approach allows us to capture all the interactions between groups and independent variables, including latent variables. This topic will be discussed in Chapter 5.


η3 and η1 is shown in the figure as a filled circle. The Mplus program for the model follows.

Mplus Program 3.9

TITLE: Testing Interactions involving continuous latent variables;
DATA: FILE = BSI_18.dat;
VARIABLE: NAMES = Y1-Y18 Gender Ethnic Age Edu Crack id;
  MISSING = ALL (-9);
  USEVARIABLES = Y5 Y2 Y8 Y11 Y14 Y17 Y3 Y6 Y9 Y12 Y15 Y18
                 Gender Ethnic Crack Age Hsch;
DEFINE: Hsch=0; if edu>4 then Hsch=1;
  CENTER Age (GRANDMEAN); !centering age;
ANALYSIS: ESTIMATOR = MLR;
  TYPE = RANDOM;
  ALGORITHM = INTEGRATION;
MODEL:
  DEP BY Y5 Y2 Y8 Y11 Y14 Y17; !Depression;
  ANX BY Y3 Y6 Y9 Y12 Y15 Y18; !Anxiety;
  Y5 with Y8;
  ANXxCRACK | ANX XWITH CRACK;
  DEP on ANX CRACK ANXxCRACK Gender Ethnic Hsch Age;
  ANX on Gender Ethnic Hsch Age;
  CRACK on Gender Ethnic Hsch Age;
OUTPUT: TECH1 TECH8;

The XWITH option (standing for "multiplied with") in the MODEL command is used to define interactions involving latent variables. The symbol | is used with the option name to define the interaction variable. In this example, the interaction term is named ANXxCRACK and is put on the left side of the symbol |; the two variables (ANX and Crack) used to define the interaction are put on its right side. With interactions involving latent variables, the ALGORITHM = INTEGRATION and TYPE = RANDOM options must be included in the ANALYSIS command. The maximum likelihood estimator with a numerical integration algorithm provides robust SE estimates. In the OUTPUT command, the TECH8 option prints the optimization history of model estimation in the Mplus output, as well as showing it on screen during computation. Selected model results are shown in Table 3.9. The interaction effect (0.001, P = 0.865) between anxiety and crack-cocaine use frequency is not statistically significant. Therefore, the effect of anxiety on depression (0.913, P < 0.001) does not depend on the value of crack use frequency. When numerical integration is carried out in the computations, Mplus does not provide the regular model fit indices, only the log-likelihood value and information criteria (e.g. Akaike's information criterion [AIC], Bayesian information criterion [BIC], and adjusted BIC).
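Because the latent interaction enters the DEP equation as an ordinary regressor, the effect of ANX on DEP at a given crack use frequency is just a linear combination of the Table 3.9 point estimates. A sketch (since the interaction is not significant here, these conditional slopes do not differ reliably):

```python
# Conditional ("simple") slope of DEP on ANX at a given level of the moderator,
# using the point estimates from Table 3.9 (b_anx = 0.913, b_int = 0.001).
B_ANX, B_INT = 0.913, 0.001

def anx_effect_on_dep(crack):
    """Effect of ANX on DEP when Crack = crack: b_anx + b_int * crack."""
    return B_ANX + B_INT * crack

print(anx_effect_on_dep(0))    # slope at Crack = 0
print(anx_effect_on_dep(30))   # slope at 30 days of use per month
```

Had the interaction been significant, reporting such conditional slopes at substantively meaningful moderator values would be the natural way to interpret it.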


Table 3.9 Selected Mplus output: testing interactions between observed variables and latent variables.

MODEL FIT INFORMATION

Number of Free Parameters                      54

Loglikelihood
  H0 Value                                -4851.836
  H0 Scaling Correction Factor for MLR        1.330

Information Criteria
  Akaike (AIC)                             9811.672
  Bayesian (BIC)                          10001.397
  Sample-Size Adjusted BIC                 9830.216
    (n* = (n + 2) / 24)

MODEL RESULTS
…
                  Estimate     S.E.   Est./S.E.   Two-Tailed P-Value
DEP ON
  ANX                0.913    0.127      7.172        0.000
  ANXXCRACK          0.001    0.006      0.170        0.865
DEP ON
  CRACK             -0.003    0.005     -0.586        0.558
  GENDER             0.099    0.103      0.960        0.337
  ETHNIC             0.143    0.116      1.229        0.219
  HSCH              -0.114    0.156     -0.727        0.467
  AGE                0.005    0.006      0.750        0.453
ANX ON
  GENDER            -0.513    0.132     -3.890        0.000
  ETHNIC             0.468    0.177      2.636        0.008
  HSCH              -0.131    0.233     -0.565        0.572
  AGE                0.013    0.007      1.750        0.080
CRACK ON
  GENDER            -2.061    1.316     -1.566        0.117
  ETHNIC            -2.165    1.849     -1.171        0.242
  HSCH               0.483    2.398      0.202        0.840
  AGE                0.076    0.071      1.072        0.284


The previous example model demonstrates how to estimate the interaction between an observed variable and a latent variable. The same approach can be used to estimate the interaction between two latent variables. In the following Mplus program, we create a latent variable of crack use frequency (Eta1) by specifying an appropriate error variance for the observed crack use frequency (Crack); the interaction between two latent variables is then estimated. The model results (not reported here) are similar to those shown in Table 3.9.

Mplus Program 3.10

TITLE: Testing Interactions between latent variables;
DATA: FILE = BSI_18.dat;
VARIABLE: NAMES = Y1-Y18 Gender Ethnic Age Edu Crack id;
  MISSING = ALL (-9);
  USEVARIABLES = Y5 Y2 Y8 Y11 Y14 Y17 Y3 Y6 Y9 Y12 Y15 Y18
                 Gender Ethnic Crack Age Hsch;
DEFINE: Hsch=0; if edu>4 then Hsch=1;
  CENTER Age (GRANDMEAN); !centering age;
ANALYSIS: ESTIMATOR = MLR;
  TYPE = RANDOM;
  ALGORITHM = INTEGRATION;
MODEL:
  DEP BY Y5 Y2 Y8 Y11 Y14 Y17; !Depression;
  ANX BY Y3 Y6 Y9 Y12 Y15 Y18; !Anxiety;
  Y5 with Y8;
  Eta1 BY CRACK@1;
  [email protected];
  ANXxEta1 | ANX XWITH Eta1;
  DEP on ANX Eta1 ANXxEta1 Gender Ethnic Hsch Age;
  ANX on Gender Ethnic Hsch Age;
  Eta1 on Gender Ethnic Hsch Age;
OUTPUT: SAMPSTAT TECH1;

3.6 Moderated mediating effect models

Suppose a model involves an outcome variable Y, an independent or exposure variable X, a mediating variable M, and a covariate Z. When an interaction between X and Z, M and Z, or M and X is included in the model, the model is considered a moderated mediation model (James and Brett 1984; Baron and Kenny 1986). There are different types of moderated mediation models (Langfred 2004; Muller et al. 2005; Preacher et al. 2007; Muthén et al. 2016). Langfred (2004) reviewed two primary forms of moderated mediation and a statistical approach for each: Type 1 and Type 2 moderated mediation. In the former, moderation operates on the relationship between the independent variable (X) and the mediating variable (M); in the latter, moderation operates on the relationship between the mediating variable


(M) and the outcome variable (Y). Three special cases of moderated mediation are discussed and demonstrated by Muthén et al. (2016):

• Case 1 (XZ). Regression of Y on X, and of M on X, both moderated by Z
• Case 2 (MZ). Regression of Y on M, moderated by Z
• Case 3 (MX). Regression of Y on M, moderated by X

In this section, we focus on demonstrating Case 1 (XZ) moderated mediation using our BSI-18 data. This moderated mediation model helps us understand whether the direct and indirect effects of X on Y vary with the value of the covariate Z. In other words, we will examine whether the direct effect of X, as well as its indirect effect via M, on Y is moderated by the covariate Z. The model is described in Figure 3.7, where the outcome variable (Y) is a latent variable, anxiety (η2).6 The model is also expressed in the following structural equations:

η2 = β0 + β1 M + β2 X + β3 Z + β4 XZ + ε1

(3.15)

M = γ0 + γ1 X + γ2 Z + γ3 XZ + ε2 = γ0 + (γ1 + γ3 Z)X + γ2 Z + ε2

(3.16)

where η2 is a latent variable representing the outcome variable ANX, M is a mediating variable (crack cocaine use frequency in the last 30 days), X is an independent/exposure variable (Ethnic: 1 – white; 0 – non-white), Z is a moderating variable (Age), and the product XZ is the interaction between X and Z. The outcome variable

Figure 3.7 Moderated mediation analysis. (Path diagram: γ1–γ3 are the paths from X (Ethnic), Z (Age), and XZ to the mediator M (Crack); β1–β4 are the paths from M, X, Z, and XZ to the outcome ANX (η2), which is measured by six indicators.)

6 The outcome variable (Y) can be either observed or latent in a moderated mediation model. However, in the current version of Mplus, the mediating variable (M) and the independent variable (X) cannot always be latent: M can be latent if the moderating variable (Z) interacts only with X, and X can be latent if Z interacts only with M (Muthén and Asparouhov 2015).


η2 is a linear function of M, X, Z, and XZ, while M is a linear function of X, Z, and XZ. Substituting Eq. (3.16) into Eq. (3.15), we have:

ANX = β0 + β1 γ0 + β1 (γ1 + γ3 Z)X + β1 γ2 Z + β1 ε2 + β3 Z + (β2 + β4 Z)X + ε1

(3.17)

where (β2 + β4 Z) and β1 (γ1 + γ3 Z) represent the direct and indirect effects of X on ANX (η2 ), respectively, each of which involves the moderating variable Z. That is, both the direct effect of predictor X and its indirect effect via mediator M on the outcome η2 are moderated by variable Z. In the following Mplus program, we test the significance of the direct and indirect effects of Ethnic (X) on Anxiety (ANX) at specific values of the moderating variable Age (Z). Mplus Program 3.11 TITLE: Case1(XZ)moderated mediation effect DATA: FILE = BSI_18.dat; VARIABLE: NAMES = Y1-Y18 Gender Ethnic Age Edu Crack id; MISSING= ALL (-9); USEVARIABLES=Y3 Y6 Y9 Y12 Y15 Y18 M X Z XZ; DEFINE: M=Crack; X=Ethnic; Z=Age; CENTER Z(GRANDMEAN); XZ=X*Z; ANALYSIS: ESTIMATOR = MLR; MODEL: ANX BY Y3 Y6 Y9 Y12 Y15 Y18; !Anxiety; ANX on M(B1) X(B2) Z(B3) XZ(B4); M on X(G1) Z(G2) XZ(G3); MODEL CONSTRAINT: NEW(DirLow DirMean DirHigh IndirLow IndirMean IndirHigh); DirLow=B2+B4*(-10)); DirMean=B2; DirHigh=B2+B4*(10); IndirLow=B1*(G1+G3*(-10)); IndirMean=B1*G1; IndirHigh=B1*(G1+G3*10);

To simplify the variable notations in the model, new variables M, X, Z, and interaction XZ between X and Z are created in the DEFINE command. To make interpretation


of parameter estimates easier, the CENTER statement with the GRANDMEAN option is used in the DEFINE command to center the moderating variable Z around its grand (overall) mean Z̄ by subtracting Z̄ from each individual score of Z.7 The robust ML estimator MLR is used for model estimation. The β1–β4 coefficients are labeled B1–B4 and the γ1–γ3 coefficients are labeled G1–G3, respectively, in the MODEL command. In the MODEL CONSTRAINT command, six new parameters (three direct effects and three indirect effects of X on ANX) are defined at specific values of the moderating variable Z (Age), using (β2 + β4 Z) and β1 (γ1 + γ3 Z) as shown in Eq. (3.17). The mean value and standard deviation (SD) of Z (Age) in the sample are 30.41 and 9.22. We use the MODEL CONSTRAINT command to test the significance of the direct and indirect effects of X on ANX at three specific values of the centered age measure: −10, 0, and 10, corresponding to the original ages of 20.41, 30.41, and 40.41, which are about one SD below the mean age, the mean age, and about one SD above the mean age, respectively. The model fits the data very well: RMSEA = 0.066 (90% CI: 0.042, 0.090), close-fit test P-value = 0.122, CFI = 0.933, TLI = 0.903, and SRMR = 0.041. Selected model results are shown in Table 3.10. The main effect (0.019, P = 0.003) of M on ANX and the interaction effect of XZ on ANX (0.029, P = 0.049) are positive and statistically significant. The direct and indirect effect estimates are shown in the New/Additional Parameters section in Table 3.10. The direct effect DIRHIGH is statistically significant (0.585, P = 0.006); that is, the effect of X on ANX is positive and statistically significant among people who were age 39.63, or one SD (i.e. 9.22) above the average age (30.41). None of the indirect effects is statistically significant. Again, with the MODEL CONSTRAINT command, the SEs of indirect effects were estimated using the multivariate delta method (Sobel 1982).
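The NEW parameters in MODEL CONSTRAINT are simple functions of the labeled coefficients, so their point estimates can be reproduced outside Mplus from Table 3.10; a sketch (significance testing still requires the delta method or the bootstrap, and small discrepancies from the table, e.g. for DirLow, reflect rounding of the underlying estimates):

```python
# Reproducing the NEW(...) parameters of Mplus Program 3.11 from the point
# estimates in Table 3.10; standard errors are not reproduced here.
B1, B2, B4 = 0.019, 0.295, 0.029     # ANX on M, on X, and on XZ
G1, G3 = -1.981, -0.053              # M on X and on XZ

def direct_effect(z):
    """Direct effect of X on ANX at centered age z: B2 + B4*z (Eq. 3.17)."""
    return B2 + B4 * z

def indirect_effect(z):
    """Indirect effect of X on ANX via M at centered age z: B1*(G1 + G3*z)."""
    return B1 * (G1 + G3 * z)

for z in (-10, 0, 10):   # about one SD below the mean age, the mean, one SD above
    print(z, round(direct_effect(z), 3), round(indirect_effect(z), 3))
```

At z = 10 this reproduces DIRHIGH = 0.585 and INDIRHIG = −0.048 from the table.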
To review how the direct and indirect effects of X vary with the values of the moderator Z, the following Mplus program generates plots of the direct and indirect effects and their 95% CIs over a range of the moderating variable Z.

Mplus Program 3.12

TITLE: Case 1 (XZ) moderated mediation effect: Creating a plot of symmetric
       confidence intervals of direct and indirect effects.
DATA: FILE = BSI_18.dat;
VARIABLE: NAMES = Y1-Y18 Gender Ethnic Age Edu Crack id;
  MISSING = ALL (-9);
  USEVARIABLES = Y3 Y6 Y9 Y12 Y15 Y18 M X Z XZ;
DEFINE: M=Crack;
  X=Ethnic;
  Z=Age;
  CENTER Z(GRANDMEAN);
  XZ=X*Z;

7 If a variable that is involved in an interaction does not have a meaningful 0, it should be recoded as a deviation from its mean (i.e. centered) in order to make its main effect interpretable (Hox 1994).


Table 3.10 Selected Mplus output: testing the significance of direct and indirect effects at specific values of a moderating variable.

MODEL RESULTS
…
                  Estimate     S.E.   Est./S.E.   Two-Tailed P-Value
ANX ON
  M                  0.019    0.007      2.944        0.003
  X                  0.295    0.176      1.673        0.094
  Z                 -0.013    0.012     -1.153        0.249
  XZ                 0.029    0.015      1.967        0.049
M ON
  X                 -1.981    2.052     -0.966        0.334
  Z                  0.134    0.170      0.792        0.428
  XZ                -0.053    0.183     -0.289        0.773

New/Additional Parameters
  DIRLOW             0.004    0.246      0.016        0.987
  DIRMEAN            0.295    0.176      1.673        0.094
  DIRHIGH            0.585    0.213      2.752        0.006
  INDIRLOW          -0.028    0.064     -0.439        0.661
  INDIRMEA          -0.038    0.042     -0.902        0.367
  INDIRHIG          -0.048    0.045     -1.074        0.283
…

ANALYSIS: ESTIMATOR = MLR;
MODEL:
  ANX BY Y3 Y6 Y9 Y12 Y15 Y18; !Anxiety;
  ANX on M(B1) X(B2) Z(B3) XZ(B4);
  M on X(G1) Z(G2) XZ(G3);
MODEL CONSTRAINT:
  PLOT(dir ind);
  LOOP(Z, -10, 10, 1);
  dir=B2+B4*Z;
  ind=B1*(G1+G3*Z);
PLOT: Type=plot3;

The LOOP option in the MODEL CONSTRAINT command is used in conjunction with the PLOT option to generate plots in which the moderating variable Z (Age) is on the x-axis and the effect is on the y-axis. The direct and indirect effects of X on ANX are named dir and ind in the LOOP option and are calculated from the labeled path coefficients (B1, B2, B4, G1, and G3) at specific values of the moderating variable Z. Because the SD of Z is 9.22, we set the low/high limits of Z in the LOOP statement8 to −10 and 10, respectively, with an increment of 1. Mplus produces plots of both the direct and indirect effects. The plot of the indirect effect of X on ANX (not reported here) shows that the 95% CIs of the effect cover 0 at every value of the moderating variable Z, indicating that the indirect effect was not statistically significant. However, the direct effect of X on ANX was significantly moderated by Z: the older the age, the larger the effect. The plot of the direct effect of X on anxiety is shown in Figure 3.8, where the 95% CI of the effect no longer covers 0 once the value of Z (Age) is about 1.50 units (years) above the mean of Z (0 in the centered measure, or 30.41 in the original measure). In other words, the ethnic difference in anxiety scores between whites and non-whites increased with age, but the difference was statistically significant only among people aged 31.91 or older. The plot can be viewed by clicking Plot → Review Plots → Moderation Plots → DIR in the output window.

Figure 3.8 MLR symmetric CI of the direct effect of X on ANX. [Plot omitted: x-axis Z from −10 to 10; y-axis DIR from −0.5 to 1.]

8 Alternatively, the following statements produce identical results:

MODEL CONSTRAINT:
  PLOT(dir ind);
  LOOP(MOD, -10, 10, 1);
  dir=B2+B4*MOD;
  ind=B1*(G1+G3*MOD);
PLOT: Type=plot3;

where the moderating variable MOD is an abbreviation of MODERATE.

3.6.1 Bootstrap confidence intervals

It is important to remember that the CIs shown in Figure 3.8 are symmetric CIs based on ML-estimator SEs. When the sampling distribution of the parameter estimates is not normal, ML symmetric CIs may lead to biased statistical inferences. Mplus enables us to use the bootstrap, a computer-intensive resampling technique, to test direct and indirect effects. The bootstrap CI of each parameter estimate is generated from the distribution of the parameter across multiple bootstrap resamples. Because no assumption is made about the shape of the sampling distribution, the CI is not necessarily symmetric. This is particularly important when the sample size is small. The bootstrap is implemented in the following Mplus program.

Mplus Program 3.13

TITLE: Case1(XZ) moderated mediation effect:
  Creating a plot with bootstrap non-symmetric
  confidence intervals.
DATA: FILE = BSI_18.dat;
VARIABLE: NAMES = Y1-Y18 Gender Ethnic Age Edu Crack id;
  MISSING= ALL (-9);
  USEVARIABLES=Y3 Y6 Y9 Y12 Y15 Y18 M X Z XZ;
DEFINE:
  M=Crack;
  X=Ethnic;
  Z=Age;
  CENTER Z(GRANDMEAN);
  XZ=X*Z;
ANALYSIS: ESTIMATOR = ML;
  BOOTSTRAP=10000;
MODEL:
  ANX BY Y3 Y6 Y9 Y12 Y15 Y18; !Anxiety;
  ANX on M(B1) X(B2) Z(B3) XZ(B4);
  M on X(G1) Z(G2) XZ(G3);
MODEL CONSTRAINT:
  PLOT(dir ind);
  LOOP(Z, -10, 10, 1);
  dir=B2+B4*Z;
  ind=B1*(G1+G3*Z);
PLOT: Type=plot3;
OUTPUT: CINTERVAL (BOOTSTRAP);
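The logic behind the percentile bootstrap CIs that Mplus reports can be sketched outside Mplus. The following Python sketch applies the percentile bootstrap to a deliberately right-skewed toy sample; the data and statistic are invented for illustration and are not the BSI-18 example:

```python
# Percentile bootstrap sketch: resample the data with replacement,
# re-estimate the statistic on each resample, and take percentiles of the
# resulting distribution as a (possibly non-symmetric) confidence interval.
import random

def percentile_bootstrap_ci(data, stat, n_boot=10_000, alpha=0.05, seed=1):
    rng = random.Random(seed)
    n = len(data)
    boots = sorted(stat([rng.choice(data) for _ in range(n)])
                   for _ in range(n_boot))
    lo = boots[int((alpha / 2) * n_boot)]        # 2.5th percentile
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]  # 97.5th percentile
    return lo, hi

def mean(xs):
    return sum(xs) / len(xs)

# Right-skewed sample: the bootstrap CI for the mean need not be symmetric.
data = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.8, 1.5, 3.0, 6.0]
lo, hi = percentile_bootstrap_ci(data, mean)
print(f"95% bootstrap CI for the mean: ({lo:.3f}, {hi:.3f})")
```

Because the interval is read directly off the 2.5 and 97.5 percentiles of the resampled statistic, no normality assumption is needed, which is exactly why the bootstrap CIs in Figure 3.9 can be asymmetric.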


Figure 3.9 Non-symmetric bootstrap CI of the direct effect of X on ANX. [Plot omitted: x-axis Z from −10 to 10; y-axis DIR from −0.7 to 1.]

When the BOOTSTRAP option of the ANALYSIS command is used in conjunction with the CINTERVAL option of the OUTPUT command, Mplus provides non-symmetric bootstrap CIs for each parameter estimate in the output. As BOOTSTRAP is not available with the MLM, MLMV, MLF, and MLR estimators, the estimator used in the program is ML. The CINTERVAL (BOOTSTRAP) statement in the OUTPUT command requests bootstrap CIs; CIs based on bias-corrected bootstrap SEs can be obtained by specifying BCBOOTSTRAP in place of BOOTSTRAP. While the parameter estimates remain the same as those from Mplus Program 3.12, the 95% CIs of the parameters are slightly different. The distribution of the direct effect of X on ANX is plotted in Figure 3.9, where the CIs are non-symmetric bootstrap CIs at values of the moderating variable Z ranging from −10 to 10. When the value of the moderating variable Z (Age) is about 2.25 units (years) above its mean (0 in the centered measure, or 30.41 in the original measure of age), the non-symmetric bootstrap CIs no longer cover 0. This indicates that the ethnic difference in anxiety scores was statistically significant only among people aged (30.41 + 2.25) = 32.66 or older; and the older the age, the larger the effect. This cutoff is slightly larger than the corresponding figure (30.41 + 1.50) = 31.91 produced by the MLR symmetric CIs in Figure 3.8. We would consider the non-symmetric bootstrap CIs more accurate.

3.6.2 Estimating counterfactual-based causal effects in Mplus

When the MODEL INDIRECT command with the MOD option is used to estimate moderated mediation effects, Mplus provides estimates of counterfactual-based causal


effects with two alternative total effect decompositions at specific values of the moderating variable (Muthén and Asparouhov 2015; Muthén and Muthén 1998-2017):

Total effect = Pure natural direct effect + Total natural indirect effect (3.18)
Total effect = Total natural direct effect + Pure natural indirect effect (3.19)

In the literature, the two decompositions are further abbreviated (Muthén and Asparouhov 2015; Muthén et al. 2016) as:

TE = PNDE + TNIE (3.20)
TE = TNDE + PNIE (3.21)

where TE stands for total effect, PNDE for pure natural direct effect, TNIE for total natural indirect effect, TNDE for total natural direct effect, and PNIE for pure natural indirect effect. The total effect decomposition defined in Eq. (3.18) or (3.20) is the one most often considered in the literature (Muthén and Asparouhov 2015; Muthén et al. 2016). In Eqs. (3.18)-(3.21), the total effect decompositions are expressed in the terminology of the counterfactual literature. Counterfactual-based mediation analysis is a recent advance in mediation analysis that gives broader definitions of direct and indirect effects and allows exposure-mediator interaction (Pearl 2001; VanderWeele and Vansteelandt 2009; Imai et al. 2010). In recent years, this analytical approach has been generalized within the SEM framework so that counterfactual-based causal effects can be estimated in the presence of latent variables (Muthén and Asparouhov 2015; Muthén et al. 2016; Muthén and Muthén 1998-2017). Counterfactual-defined causal effect analysis is a challenging topic. However, in situations (e.g. our Case 1 (XZ) moderated mediation model) where both the outcome and the mediating variable are continuous and there is no exposure-mediator interaction (e.g. XM interaction), the counterfactual-defined direct and indirect effects coincide with the effects obtained in traditional mediation analysis, assuming no unmeasured confounding, such as exposure-outcome, mediator-outcome, or exposure-mediator confounders (Muthén and Asparouhov 2015; Muthén et al. 2016). In the following program, we estimate our Case 1 (XZ) moderated mediation model using the MODEL INDIRECT command with the MOD option.

Mplus Program 3.14

TITLE: Case1(XZ) moderated mediation effect:
  Using MODEL INDIRECT command with MOD option.
DATA: FILE = BSI_18.dat;
VARIABLE: NAMES = Y1-Y18 Gender Ethnic Age Edu Crack id;
  MISSING= ALL (-9);
  USEVARIABLES=Y3 Y6 Y9 Y12 Y15 Y18 M X Z XZ;
DEFINE:


  M=Crack;
  X=Ethnic;
  Z=Age;
  CENTER Z(GRANDMEAN);
  XZ=X*Z;
ANALYSIS: ESTIMATOR = ML;
  BOOTSTRAP=10000;
MODEL:
  ANX BY Y3 Y6 Y9 Y12 Y15 Y18; !Anxiety;
  ANX on M X Z XZ;
  M on X Z XZ;
MODEL INDIRECT:
  ANX MOD M Z(-10, 10, 1) XZ X;
OUTPUT: CINTERVAL(BOOTSTRAP);
PLOT: Type=plot3;

The outcome variable ANX is specified on the left side of the MOD option in the MODEL INDIRECT command, and the other variables follow on the right side in this order: mediating variable (M), moderating variable (Z), interaction (XZ), and exposure variable (X). Mplus requires the variables to appear in exactly this order. To estimate the direct and indirect effects of the exposure variable X on the outcome ANX over a range of specific values of the moderating variable Z, the numbers in parentheses following the moderating variable Z give the low/high limits of Z and the increment used for the x-axis of the effect plots. In our example, we set the low/high limits to −10 and 10 with an increment of 1, which is about one SD (i.e. 9.22) below and above the mean of the centered measure of Age (Z). Strictly, two values should also be specified in parentheses following the exposure variable X, as in X(X1, X0), where X0 is the reference value to which X1 is compared (Muthén et al. 2016). In our example, X is a dichotomous variable (1 - white; 0 - non-white), so the specification X(1, 0) simplifies to X by default. Bootstrap with 10 000 resamples was used for model estimation. Mplus produces bootstrap SEs for the parameter estimates, as well as non-symmetric bootstrap CIs in the plots of the effects against the moderating variable Z. We report the bootstrap direct and indirect effects of X on ANX and their non-symmetric bootstrap 95% CIs at three specific values of the moderating variable Z = −10, 0, and 10 (i.e. one SD below the mean age, the mean age, and one SD above the mean age) in Table 3.11, where the total effect is decomposed using the two approaches. The bootstrap direct and indirect effects produced by the two total effect decompositions are identical, and they are also identical to the corresponding effects produced by the traditional approach in Mplus Program 3.11.
In addition, Mplus Program 3.14 generated plots (not reported here) of the pure natural direct effect and total natural indirect effect that are identical to the plots generated by Mplus Program 3.13.

Table 3.11 Selected Mplus output: bootstrap estimates of the direct and indirect effects and their non-symmetric CIs at specific values of the moderating variable Z.

CONFIDENCE INTERVALS OF TOTAL, INDIRECT, AND DIRECT EFFECTS BASED ON COUNTERFACTUALS (CAUSALLY-DEFINED EFFECTS)

                  Lower .5%  Lower 2.5%  Lower 5%  Estimate  Upper 5%  Upper 2.5%  Upper .5%

Effects from X to ANX for Z = -10.000 (total effect decomposition by Eqs. (3.18) and (3.20))
Tot natural IE       -0.244      -0.180    -0.150    -0.028     0.076       0.098      0.146
Pure natural DE      -0.888      -0.599    -0.472     0.004     0.395       0.469      0.621
Total effect         -0.943      -0.645    -0.519    -0.024     0.392       0.476      0.631

Other effects (total effect decomposition by Eqs. (3.19) and (3.21))
Pure natural IE      -0.244      -0.180    -0.150    -0.028     0.076       0.098      0.146
Tot natural DE       -0.888      -0.599    -0.472     0.004     0.395       0.469      0.621
Total effect         -0.943      -0.645    -0.519    -0.024     0.392       0.476      0.631
…
Effects from X to ANX for Z = 0.000 (total effect decomposition by Eqs. (3.18) and (3.20))
Tot natural IE       -0.187      -0.140    -0.119    -0.038     0.022       0.035      0.063
Pure natural DE      -0.284      -0.103    -0.028     0.295     0.579       0.636      0.743
Total effect         -0.353      -0.164    -0.080     0.257     0.560       0.621      0.727

Other effects (total effect decomposition by Eqs. (3.19) and (3.21))
Pure natural IE      -0.187      -0.140    -0.119    -0.038     0.022       0.035      0.063
Tot natural DE       -0.284      -0.103    -0.028     0.295     0.579       0.636      0.743
Total effect         -0.353      -0.164    -0.080     0.257     0.560       0.621      0.727
…
Effects from X to ANX for Z = 10.000 (total effect decomposition by Eqs. (3.18) and (3.20))
Tot natural IE       -0.211      -0.163    -0.137    -0.048     0.015       0.028      0.058
Pure natural DE       0.026       0.145     0.216     0.585     0.937       1.010      1.126
Total effect         -0.083       0.075     0.145     0.537     0.901       0.969      1.094

Other effects (total effect decomposition by Eqs. (3.19) and (3.21))
Pure natural IE      -0.211      -0.163    -0.137    -0.048     0.015       0.028      0.058
Tot natural DE        0.026       0.145     0.216     0.585     0.937       1.010      1.126
Total effect         -0.083       0.075     0.145     0.537     0.901       0.969      1.094
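Using the point estimates from Table 3.11, one can verify numerically that the two decompositions coincide in this no-XM-interaction case. The following Python check simply confirms TE = DE + IE at each tabled value of Z (rounding is the only possible source of discrepancy):

```python
# Numerical check of the total-effect decompositions in Eqs. (3.18)-(3.21)
# using the bootstrap point estimates from Table 3.11. With a continuous
# outcome and mediator and no exposure-mediator (XM) interaction, the two
# decompositions give the same direct, indirect, and total effects.
effects = {  # Z value: (indirect effect, direct effect, total effect)
    -10: (-0.028, 0.004, -0.024),
      0: (-0.038, 0.295, 0.257),
     10: (-0.048, 0.585, 0.537),
}
for z, (ie, de, te) in effects.items():
    # TE = PNDE + TNIE and TE = TNDE + PNIE collapse to the same sum here.
    assert abs((de + ie) - te) < 1e-9, z
print("TE = DE + IE holds at every tabled value of Z")
```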


3.7 Using plausible values of latent variables in secondary analysis

In Section 2.10 of Chapter 2, we demonstrated how to impute and save multiple sets of plausible values of latent variables. In this section, we discuss and demonstrate how to use the saved imputed plausible values for secondary analysis. Compared with the often-used total scores or factor scores, plausible values of constructs produce more accurate estimates of the relationships between latent variables and other variables (Asparouhov and Muthén 2010d). The literature shows different ways of applying plausible values (PVs) in secondary analysis (von Davier et al. 2009): (i) the PV-1 approach uses only the first set of plausible values; (ii) the PV-W (W for wrong) approach uses each of the multiple sets of plausible values separately for parameter estimation and then reports the simple average of the parameter estimates, including SEs; and (iii) the PV-R (R for right) approach treats plausible value data sets as multiple imputation (MI) data sets (Mislevy 1991; Asparouhov and Muthén 2010d) and analyzes them using Rubin's (1987) method. The first two approaches are incorrect applications of plausible values. Although plausible values are random draws from an MCMC-generated posterior distribution of factor scores, analytical shortcuts such as PV-1 and PV-W produce biased statistical inferences and should not be used. Only PV-R should be used when applying plausible values in secondary data analysis. To implement PV-R, multiple plausible value data sets are analyzed using Rubin's (1987) method. That is, the same model is estimated with each of the m plausible value data sets, and the m sets of results are then combined for inference. The MI parameter estimate is the mean $\bar{b}$ of the $b_i$ values estimated from each of the m (five or more) sets of plausible values, and its SE is:

$$\sqrt{\frac{1}{m}\sum_{i} V(b_i) + \left(1+\frac{1}{m}\right)\frac{1}{m-1}\sum_{i}\left(b_i-\bar{b}\right)^2} \qquad (3.22)$$

where the first term under the square root is the average of the sampling variances $V(b_i)$ of the parameter estimated from each plausible value set, and the second term is the variance of the m plausible-value-based parameter estimates $b_i$ multiplied by a correction factor (1 + 1/m). In Section 2.10 of Chapter 2, we imputed and saved five plausible value data sets named PVSimp1-PVSimp5.dat; those files are listed in the file PVSimplist.dat, through which the plausible value data sets can be retrieved for further analysis. For simplicity, we only use the plausible values of two latent variables/factors (DEP and ANX) in the structural equation model implemented in the following Mplus program.
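Rubin's pooling rules in Eq. (3.22) are easy to sketch outside Mplus. In the following Python sketch, the estimates and their sampling variances are made-up numbers chosen only to illustrate the arithmetic:

```python
# Sketch of Rubin's (1987) rules as in Eq. (3.22): pool m parameter estimates
# b_i and their sampling variances V(b_i) from m plausible-value (or multiply
# imputed) data sets. The inputs below are illustrative, not from the example.
import math
import statistics

def rubin_pool(estimates, variances):
    m = len(estimates)
    b_bar = statistics.mean(estimates)          # pooled (MI) point estimate
    w = statistics.mean(variances)              # within-imputation variance
    b = sum((bi - b_bar) ** 2 for bi in estimates) / (m - 1)  # between-imputation variance
    total_var = w + (1 + 1 / m) * b             # quantity under the root in Eq. (3.22)
    return b_bar, math.sqrt(total_var)

est, se = rubin_pool([0.50, 0.60, 0.70, 0.55, 0.65], [0.01] * 5)
print(f"pooled estimate = {est:.3f}, SE = {se:.4f}")
```

Note that the pooled SE exceeds the average within-imputation SE (here 0.1) because the between-imputation spread of the five estimates is added in, inflated by the (1 + 1/m) correction.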


Mplus Program 3.15

TITLE: SEM Using Plausible Values
DATA: File=PVSimplist.dat;
  Type=imputation;
VARIABLE: NAMES = Y1-Y18 Gender Ethnic Age Edu Crack SOM DEP ANX ID;
  USEVARIABLES = DEP ANX Gender Ethnic Age Hsch;
DEFINE: Hsch=0;
  if Edu>4 then Hsch=1;
ANALYSIS: ESTIMATOR = MLR;
MODEL:
  DEP on ANX Gender Ethnic Age Hsch;
  ANX on Gender Ethnic Age Hsch;
OUTPUT: STDYX;

The FILE command retrieves the file PVSimplist.dat generated by Mplus Program 2.20. This file is not a data file but a list of the names of the five plausible value data sets (PVSimp1.dat, PVSimp2.dat, PVSimp3.dat, PVSimp4.dat, and PVSimp5.dat) imputed by Mplus Program 2.20. Mplus treats the five data sets as five MI data sets, and thus TYPE=IMPUTATION must be specified in the DATA command. The plausible values of the latent variables DEP and ANX are used for modeling in the program. Replacing the two latent variables with their plausible values in effect replaces the measurement part of the structural equation model with two "observed" variables; the model therefore reduces to a path analysis model. The model is estimated with each of the five plausible value data sets in turn; the parameter estimates are then averaged over the five sets of analyses, and SEs are computed using Rubin's (1987) method. Selected model results are shown in Table 3.12. When analyzing MI data sets, all model fit statistics/indices and model parameters are estimated multiple times and then averaged. Because our model is saturated, with zero degrees of freedom, model fit statistics/indices are not available. The parameter estimates are similar to those from Mplus Program 3.5: ANX has a significant positive effect on DEP (0.784, p < 0.001), Gender has a significant negative effect on ANX (−0.521, p < 0.001), and Ethnic has a significant positive effect on ANX (0.559, p = 0.010). Note that Mplus produces a new column called Rate of Missing in the model results (see the last column of Table 3.12). It refers not to the rate of missing values in the data but to the fraction of missing information (FMI) (Schafer 1997). The FMI is a very important concept in MI theory. It pertains to a model parameter rather than a variable, and it varies with the model and the number (m) of imputations. As m approaches infinity, the FMI is simply the fraction of the between-imputation variance of the parameter estimates in the total variance (i.e. the between-imputation variance plus the within-imputation variance) (see Appendix 3.B), indicating the ratio of


Table 3.12 Selected Mplus output: SEM using plausible values.

MODEL RESULTS
…
                                              Two-Tailed   Rate of
             Estimate     S.E.   Est./S.E.       P-Value   Missing
DEP ON
  ANX           0.784    0.054      14.549         0.000     0.552
  GENDER        0.038    0.111       0.345         0.730     0.498
  ETHNIC        0.177    0.125       1.414         0.157     0.337
  AGE           0.005    0.006       0.903         0.366     0.393
  HSCH         -0.044    0.168      -0.261         0.794     0.459
ANX ON
  GENDER       -0.521    0.141      -3.689         0.000     0.103
  ETHNIC        0.559    0.218       2.563         0.010     0.374
  AGE           0.013    0.008       1.638         0.101     0.154
  HSCH         -0.128    0.263      -0.488         0.625     0.177
information lost due to missing data to the total information; that is probably why the FMI is called "the fraction of missing information" (Rubin 1987). The FMI is a useful measure of the impact of missing data on the quality of parameter estimates. It is a key factor in determining the relative efficiency (RE) of parameter estimates based on MI data (Rubin 1987). For a finite number of imputations, Rubin (1987) used the FMI to define the RE of MI:

RE = (1 + γ/m)⁻¹ (3.23)

where γ is the FMI and m is the number of imputations. The recommended minimum RE is 90-95% (Rubin 1987). This formula leads to the conclusion that just 3-5 imputations are sufficient to obtain excellent results. For example, with a low FMI (e.g. γ = 0.20), three imputations (m = 3) yield RE = 0.94. Even with a high γ = 0.50, five imputations (m = 5) still yield 91% efficiency (RE = 0.91). While Rubin (1987) considered five or fewer imputations sufficient, Schafer (1999) recommended 5-10 imputations and saw no practical benefit in using more than 5-10 unless the FMI is unusually high. In fact, even 10 imputations may not be sufficient. Because the SE of an MI parameter estimate is computed from both the within- and between-data-set variances (see Eq. (3.22)), when the number of imputations (data sets) is small, the SE of the parameter estimate, and hence its p-value, will not be stable, because the estimate of the between-data-set variance will not be stable. Therefore, it is argued that the number of imputations should not be too small, especially if the FMI is high. Because their Monte Carlo simulations showed that statistical power diminished when m is small, Graham et al. (2007) recommended 20 imputations for FMI of 0.10-0.30, and 40 imputations for FMI of 0.50. In addition, both Bodner (2008) and Royston and White (2011) recommended a simplified rule of thumb: the number of imputations should be similar to the percentage of cases with missing values. For example, if 30% of the cases in the sample have missing values on any variable included in the model, then 30 imputed data sets are needed. Of course, with a large number of imputed data sets, model estimation becomes very time consuming.
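Equation (3.23) and the worked examples above can be checked with a few lines of Python (a sketch outside Mplus, using the γ and m values quoted in the text):

```python
# Relative efficiency of multiple imputation, Eq. (3.23):
# RE = (1 + gamma/m)^(-1), where gamma is the fraction of missing
# information (FMI) and m is the number of imputations.
def relative_efficiency(gamma, m):
    return 1 / (1 + gamma / m)

# Worked examples from the text: low FMI with 3 imputations,
# high FMI with 5 imputations.
print(f"{relative_efficiency(0.20, 3):.4f}")  # about 0.94
print(f"{relative_efficiency(0.50, 5):.4f}")  # about 0.91
```

The diminishing returns are easy to see: doubling m from 5 to 10 at γ = 0.50 raises RE only from about 0.91 to about 0.95, which is why the practical argument for more imputations rests on the stability of SEs rather than on RE alone.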

3.8 Bayesian structural equation modeling (BSEM)

In Section 1.4 of Chapter 1, we discussed some advantages of BSEM, and we demonstrated the application of Bayesian CFA (BCFA) in Mplus Program 2.19. Covariates (e.g. gender, ethnicity, age, education) can readily be added to that program to predict the latent variables, yielding a Bayesian structural equation model. However, when the measurement part of a structural equation model involves multiple latent variables, each with multiple items, there are many parameters to estimate, particularly with a Bayes estimator. As mentioned earlier, we can make the model parsimonious by first imputing the plausible values of the latent variables using BCFA and then replacing the latent variables in the structural equation model with those plausible values; the structural equation model is thereby reduced to a path analysis model. Mplus Program 3.15 shows an example of implementing such a path analysis model using the robust MLR estimator. In that program, five plausible value data sets were analyzed simultaneously using Rubin's (1987) method. Unfortunately, multiple imputation data sets cannot be analyzed simultaneously with a Bayes estimator in the current version of Mplus. In this section, for the purpose of demonstration, we run a Bayesian path analysis model using the first data set (PVSimp1.dat) of the plausible values of the latent variables imputed by Mplus Program 2.20. The model is the same as the one shown in Figure 3.3 except that the latent variables DEP and ANX are replaced with their plausible values. We focus on evaluating the indirect effect of Gender on DEP through ANX. Informative priors are used for the slope coefficients involved in the indirect effect (i.e. the slope of regressing DEP on ANX, and the slope of regressing ANX on Gender).
Informative priors can be obtained from literature reviews, meta-analyses, findings of previous studies, or best theoretical guesses for the corresponding parameters. Informative priors with a reasonable and finite range are very important, since they can substantially affect model estimation and results. Here we estimate the slopes and their SEs from the path analysis model using an MLR estimator and treat those estimates as informative priors for the two slopes in the Bayesian path analysis model.

Mplus Program 3.16

TITLE: BSEM with informative priors
DATA: File=PVSimp1.dat;
VARIABLE: NAMES=Y1-Y18 Gender White Age Edu Crack SOM DEP ANX ID;


  MISSING= *;
  USEVARIABLES=DEP ANX Gender White Age Hsch;
DEFINE: Hsch=0;
  if edu>4 then Hsch=1;
ANALYSIS: ESTIMATOR = BAYES;
  PROCESS=4;
  BITERATIONS=(10000);
  !POINT=median; !Default;
  !CHAIN=2; !Default;
MODEL:
  DEP on ANX(a) Gender White Age Hsch;
  ANX on Gender(b) White Age Hsch;
MODEL PRIORS:
  a ~ N(0.786, 0.007);
  !Prior variance is 4 times larger than the ML estimate of slope variance;
  b ~ N(-0.504, 0.083);
MODEL CONSTRAINT:
  NEW(indirect);
  indirect = a*b;
OUTPUT: TECH8 STDY;
PLOT: TYPE = PLOT2;

A minimum of 10 000 iterations is specified with the BITERATIONS option for Bayesian estimation. By default, the Mplus Bayesian point estimate of a parameter is the median of the posterior distribution; this can be changed with the POINT option (e.g. POINT=MEAN or POINT=MODE). Considering that the posterior distribution of the indirect effect under study may be non-normal, we keep the default. The slope coefficient of regressing DEP on ANX is labeled a, and the slope coefficient of regressing ANX on Gender is labeled b. The ML point estimate (SE) is 0.786 (0.042) for slope a and −0.504 (0.144) for slope b. A normal prior is used for both slopes. For the purpose of demonstration, the ML point estimates are treated as "previous" findings and used as the means of the normal priors in the Bayesian estimation. The variances of the normal priors are set to 4 × (0.042)² ≈ 0.007 for slope a and 4 × (0.144)² ≈ 0.083 for slope b; that is, the prior variances are set to four times the ML slope variances to allow for possible differences between the priors and the current study (Yuan and MacKinnon 2009). The MODEL CONSTRAINT command is used to estimate the indirect effect. When an ML estimator is used for model estimation, the indirect effect is computed by default using the delta method, which assumes a normal distribution of the indirect effect. With a Bayes estimator, MODEL CONSTRAINT provides non-symmetric Bayesian credibility intervals for the indirect effect. The TECH8 option in the OUTPUT command provides a Kolmogorov-Smirnov (KS) test and the potential scale reduction (PSR) to examine the convergence of multiple Markov chain Monte Carlo (MCMC) chains run in parallel.


The PLOT2 option in the PLOT command provides a variety of graphics for viewing the MCMC iteration history. The TECH8 output shows that the convergence criterion (PSR < 1.10) was fulfilled rather quickly: the highest PSR is as low as 1.004 after 100 iterations and does not rise over further iterations. The model fits the data very well: the 95% CI (−15.452, 14.749) of the difference between the observed and model-generated data χ² values centers around 0, and the posterior predictive p-value is PPP = 0.487 (Table 3.13). In addition to evaluating the PSR and PPP, the convergence of Bayesian estimation is often examined visually by inspecting three graphical summaries of the parameters: Bayesian posterior parameter trace plots, Bayesian autocorrelation plots, and Bayesian posterior parameter distributions. Although Mplus produces such graphical summaries for all parameters, we focus on the indirect effect of Gender on DEP for the purpose of demonstration. Figure 3.10a shows a trace plot of two chains for the indirect effect of Gender on DEP. It plots the simulated parameter values against the iteration number, connecting consecutive values with a line for each of the two MCMC chains, colored red and blue, respectively. In the trace plot, the first half of the 10 000 iterations is treated as a burn-in period, marked by a vertical line at 5000 iterations; only the second half of the iterations is used to generate the desired (target) posterior distribution for parameter estimation. Each of the two chains reaches equilibrium quickly; in other words, each chain mixes very quickly. The mixing time of a Markov chain is the time until the chain, excluding the burn-in period, has reached its stationary distribution. Mixing is an important property of MCMC convergence. In addition, mixing also refers to the similarity of the posterior distributions between chains.
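The PSR that Mplus reports under TECH8 is based on comparing within-chain and between-chain variability. A simplified version of the computation can be sketched in Python; the two simulated chains below are synthetic stand-ins for MCMC output:

```python
# Simplified Gelman-Rubin potential scale reduction (PSR) for chains of equal
# length. Values near 1 (e.g. below 1.10) suggest the chains have mixed.
import math
import random
import statistics

def psr(chains):
    n = len(chains[0])
    chain_means = [statistics.mean(c) for c in chains]
    w = statistics.mean(statistics.variance(c) for c in chains)  # within-chain variance
    b = n * statistics.variance(chain_means)                     # between-chain variance
    var_hat = (n - 1) / n * w + b / n                            # pooled variance estimate
    return math.sqrt(var_hat / w)

# Two well-mixed "chains" drawn around the same value give PSR close to 1.
rng = random.Random(0)
chain1 = [rng.gauss(-0.4, 0.1) for _ in range(2000)]
chain2 = [rng.gauss(-0.4, 0.1) for _ in range(2000)]
print(round(psr([chain1, chain2]), 3))
```

If the two chains were drawn around different values (i.e. had not converged to the same distribution), the between-chain term would dominate and the PSR would exceed 1 noticeably.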
If the between-chain distributions of the simulations are identical, the PSR equals 1, and we say the chains have fully mixed. Another way to inspect convergence is to check the autocorrelations between parameter estimates lagged by some number of iterations. A lag-k autocorrelation refers to the correlation between pairs of parameter estimates k iterations apart. A small autocorrelation (e.g. 0.1 or less) is desirable, indicating that the parameter values in the stationary distribution can be thought of as approximately independent (random) draws from the posterior distribution, not reliant on the initial values of the chain. In contrast, high autocorrelations suggest slow chain mixing and, usually, slow convergence to the posterior distribution. If the autocorrelation is high at small lags but decreases with increasing lag, we can use the THIN=k option in the ANALYSIS command to thin the chain, keeping only every kth iteration. For the well-mixed chains in our example, autocorrelation is negligible even at small lags. Figure 3.10b shows an autocorrelation plot for the indirect effect of Gender on DEP via ANX in Chain 1 (a similar plot is available for Chain 2 in the Mplus plot window). The autocorrelation is close to 0 starting at lag 1, indicating that the parameter estimates across iterations are almost uncorrelated. Figure 3.10c shows the density plot, i.e. the Bayesian posterior parameter distribution, for the indirect effect of Gender on DEP in our path analysis example. As mentioned before, of the total 10 000 iterations in the Bayesian estimation, the first 5000 burn-in iterations are excluded, and each of the two chains retains 5000 iterations;

Table 3.13 Selected Mplus output: Bayesian path analysis.

MODEL FIT INFORMATION

Number of Free Parameters                        13

Bayesian Posterior Predictive Checking using Chi-Square
  95% Confidence Interval for the Difference Between
  the Observed and the Replicated Chi-Square Values    -15.452   14.749
  Posterior Predictive P-Value                           0.487
  Prior Posterior Predictive P-Value                     0.697

Information Criteria
  Deviance (DIC)                             1101.044
  Estimated Number of Parameters (pD)          12.762
  Bayesian (BIC)                             1147.098

MODEL RESULTS
                        Posterior  One-Tailed          95% C.I.
             Estimate        S.D.     P-Value  Lower 2.5%  Upper 2.5%  Significance
DEP ON
  ANX           0.829       0.035       0.000       0.760       0.898   *
  GENDER        0.001       0.081       0.497      -0.161       0.161
  WHITE         0.295       0.116       0.005       0.072       0.527   *
  AGE           0.008       0.004       0.041      -0.001       0.016
  HSCH         -0.127       0.130       0.167      -0.379       0.129
ANX ON
  GENDER       -0.596       0.117       0.000      -0.825      -0.366   *
  WHITE         0.539       0.189       0.002       0.174       0.910   *
  AGE           0.013       0.007       0.031      -0.001       0.027
  HSCH         -0.039       0.217       0.430      -0.461       0.388
Intercepts
  DEP          -0.480       0.198       0.008      -0.869      -0.087   *
  ANX          -0.366       0.321       0.125      -1.005       0.275
Residual Variances
  DEP           0.317       0.029       0.000       0.267       0.381   *
  ANX           0.881       0.081       0.000       0.741       1.059   *
New/Additional Parameters
  INDIRECT     -0.493       0.100       0.000      -0.692      -0.299   *

thus, the target posterior distribution is approximated based on 2 × 5000 = 10 000 parameter values. Kernel-density estimation (Botev et al. 2010; Muthén 2010) is used to smooth over the parameter values and produce an estimate of the target posterior distribution. The density plot is a summary of the target posterior distribution in which parameter estimates are marked by vertical lines in the form of the mean,


Figure 3.10 (a) Bayesian trace plot of the indirect effect of Gender on DEP through ANX. (b) Bayesian autocorrelation plot (Chain 1) of the indirect effect of Gender on DEP through ANX. (c) Bayesian posterior distribution of the indirect effect of Gender on DEP through ANX. [Plots omitted; panel (c) reports Mean = −0.39602, Std Dev = 0.09936, Median = −0.39559, Mode = −0.37595, 95% Lower CI = −0.59400, 95% Upper CI = −0.20159.]

172

STRUCTURAL EQUATION MODELING

median, or mode of the posterior distribution. The Bayesian posterior parameter distribution does not have to be normal, unlike the sampling distribution that ML relies on for statistical inference. Instead, the Bayesian credibility interval uses the 2.5 and 97.5 percentiles of the posterior distribution for statistical inference (Muthén 2010). Note that the density plot is not a formal means of diagnosing model convergence, as the autocorrelation and trace plots are. However, unexpected peaks or strange shapes in the posterior density can be a sign of poor convergence, while a unimodal density function indicates a good simulation of the posterior distribution. The path coefficient estimates and their significance tests are reported in the middle panel of Table 3.13. The first column gives the point estimate, which by default is the median of the posterior distribution. Bayesian statistical inferences are made based on credible or credibility intervals (CIs) (Gelman et al. 2004; Gill 2008; Muthén 2010). Unlike the ML CI (i.e. Estimate ± 1.96 × SE), which assumes a symmetric distribution, the Bayesian credible interval is based on the percentiles of the posterior distribution, allowing for a strongly skewed distribution. Columns 5 and 6 in Table 3.13 show the Bayesian posterior 95% credible interval (Bayesian 95% CI), established by the 2.5 and 97.5 percentiles of the posterior distribution. The Bayesian 95% CI is interpreted as the interval that contains the population parameter with 95% probability. If the Bayesian 95% CI does not cover 0, the corresponding parameter estimate is significantly different from 0 at the 0.05 level. Statistically significant parameter estimates are marked by an asterisk (*) in the last column. Note that a one-tailed p-value is given in Column 3; this p-value is also based on the posterior distribution.
For a positive/negative coefficient estimate, the p-value refers to the proportion of the posterior distribution that is below/above 0 (Muthén 2010). The indirect effect of Gender on DEP through ANX (−0.493, 95% CI: −0.692, −0.299) is reported in the New/Additional Parameters section at the bottom of Table 3.13. In addition to using informative priors for model estimation, we also tried model estimation with non-informative priors (the Mplus default N(0, ∞) prior for slope coefficients) and obtained a very similar indirect effect estimate (−0.514, 95% CI: −0.736, −0.298). However, using informative priors slightly reduced the width of the credibility interval for the indirect effect estimate. Our results provide evidence that incorporating prior knowledge into Bayesian estimation produces more precise estimates.
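The percentile-based credible interval and one-tailed p-value described above are straightforward to compute from raw posterior draws. The sketch below (plain Python with NumPy; the simulated draws are illustrative, not the book's actual MCMC output) takes a vector of posterior draws and returns the median point estimate, the 2.5/97.5 percentile 95% CI, and the one-tailed p-value as defined in the text.

```python
import numpy as np

def bayes_summary(draws):
    """Summarize posterior draws the way Mplus reports them:
    median point estimate, percentile-based 95% CI, and a
    one-tailed p-value (share of draws on the far side of 0)."""
    draws = np.asarray(draws)
    est = np.median(draws)
    lower, upper = np.percentile(draws, [2.5, 97.5])
    # For a negative estimate, p = proportion of draws above 0;
    # for a positive estimate, proportion of draws below 0.
    p_one_tailed = np.mean(draws > 0) if est < 0 else np.mean(draws < 0)
    significant = not (lower <= 0 <= upper)  # 95% CI excludes 0
    return est, (lower, upper), p_one_tailed, significant

# Illustrative posterior roughly centered at the reported indirect effect
rng = np.random.default_rng(42)
draws = -0.5 + 0.1 * rng.standard_normal(10_000)
est, ci, p, sig = bayes_summary(draws)
```

Because the interval is percentile-based, no symmetry around the point estimate is assumed, which is why it remains valid for skewed posteriors.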

Appendix 3.A Influence of measurement errors

Measurement errors are known to behave non-randomly, randomly, or both. When the non-random component of the error in a variable is the same for all respondents, it affects the central tendency or the mean of the response distribution, but not the relation of the variable with other variables. However, it is difficult to deal with non-random errors that vary across individuals. Random errors, on the other hand, increase unexplainable variation and can obscure potential relations among variables (Alwin 1989; Alwin and Krosnick 1991). SEM typically assumes random measurement errors. Here we briefly review the effect of random measurement errors on regression analysis. Appendix 2.B shows that reliability is defined as the extent to which the variance of an observed variable is explained by the true scores that the variable is supposed to measure:

ρ = (Var(x) − Var(δ)) / Var(x) = 1 − Var(δ)/Var(x)    (3.A.1)

where Var(δ) and Var(x) are the variances of the random measurement error δ and the observed variable x, respectively. Reliability ρ less than 1.0 indicates the existence of measurement error. However, imperfect reliability and measurement error in dependent and independent variables have different effects in linear regression analysis (Werts et al. 1976). Measurement error in a dependent variable does not bias the unstandardized regression coefficients because the measurement error is absorbed into the disturbance term; but it will bias the standardized regression coefficients because the weights of the standardized regression coefficients are a function of the standard deviations of both the dependent and independent variables. Measurement errors in independent variables are more problematic in regression analysis. In a regression model, measurement error in an independent variable biases the least-squares estimate of the slope coefficient downward. The magnitude of the bias depends on the reliability of the variable, with low reliability causing greater bias in the regression coefficient. Let's use a simple regression y = bx + e as an example, assuming y = η + ε and x = ξ + δ, where η and ξ are the true scores of y and x, respectively, and the measurement errors ε and δ are independent of each other. The covariance between x and y is:

Cov(x, y) = Cov(ξ + δ, η + ε) = Cov(ξ, η)    (3.A.2)

and the regression slope coefficient b is equal to

b = Cov(x, y)/Var(x) = [Cov(ξ, η)/Var(ξ)] × [Var(ξ)/Var(x)] = βρ    (3.A.3)

where 𝛽 = Cov(𝜉, η)/Var(𝜉) is the slope coefficient of regressing the “true” dependent variable η on the “true” independent variable 𝜉, and 𝜌 = Var(𝜉)/Var(x) is the attenuation factor that is the reliability of x. When 𝜌 is perfect (i.e. 𝜌 = 1.0), b = β; otherwise, b is attenuated downward. If two or more independent variables in a multiple linear regression have measurement errors, the effects of the measurement errors on estimation of the regression


coefficients are complicated. A coefficient may be biased either downward or upward, and the signs of the coefficients may even be reversed (Armstrong 1990; Bohrnstedt and Carter 1971; Cochran 1968; Kenny 1979). According to Allison and Hauser (1991), the bias depends “in complex ways on the true coefficients, the degrees of measurement error in each variable, and the pattern of inter-correlations among the independent variables” (p. 466).
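The attenuation result b = βρ in Eq. (3.A.3) is easy to verify by simulation. The sketch below (illustrative Python, not from the book) generates a true predictor ξ, adds random measurement error δ to form the observed x, and checks that the OLS slope of y on x shrinks toward βρ rather than recovering β.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
beta = 0.8                            # true slope of eta on xi

xi = rng.standard_normal(n)           # true score, Var(xi) = 1
delta = 0.5 * rng.standard_normal(n)  # measurement error, Var(delta) = 0.25
x = xi + delta                        # observed predictor
y = beta * xi + 0.3 * rng.standard_normal(n)  # outcome measured without error

rho = np.var(xi) / np.var(x)          # reliability of x (about 1/1.25 = 0.8)
b = np.cov(x, y)[0, 1] / np.var(x)    # OLS slope of y on x

# b is attenuated toward beta * rho (about 0.64), not beta (0.8)
```

With reliability ρ ≈ 0.8, the observed slope is pulled down by roughly 20%, exactly the attenuation factor derived above.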

Appendix 3.B Fraction of missing information (FMI)

For a limited number of multiple imputations, FMI γ is defined as (Rubin 1987; Pan and Wei 2016):

γ = (r + 2/(df + 3)) / (r + 1)    (3.B.1)

where r is the relative increase in variance due to missing data and df is the degrees of freedom, defined as the following:

df = (m − 1)(1 + 1/r)²    (3.B.2)

r = (1 + 1/m)B / U    (3.B.3)

where U is the within-imputation variance (the average of the squared standard errors SEᵢ of a parameter estimate over the m plausible value data sets); and B is the between-imputation variance (the variance of a parameter estimate across the multiple plausible value data sets) that captures the estimation variability due to missing data. U is defined as

U = (1/m) Σᵢ₌₁ᵐ SEᵢ²    (3.B.4)

and B is defined as

B = (1/(m − 1)) Σᵢ₌₁ᵐ (Qᵢ − Q̄)²    (3.B.5)

where Qᵢ is the estimate of the parameter of interest from the ith imputation in the MI and Q̄ is the mean of the Qᵢ. The total variance (T) is

T = U + (1 + 1/m)B    (3.B.6)

where the (1/m) is an adjustment for the randomness associated with a finite number (m) of imputations. When m approaches infinity, df → ∞, so r → B/U (see Eq. 3.B.3), and Eq. 3.B.1 reduces to

γ_(m→∞) = B/T    (3.B.7)
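Equations (3.B.1)–(3.B.6) can be assembled into a short function. The sketch below (illustrative Python, not tied to any particular imputation software; the example estimates and standard errors are made up) computes U, B, r, df, T, and the FMI from the m point estimates and standard errors obtained from multiply imputed data sets.

```python
import numpy as np

def fmi(estimates, std_errors):
    """Fraction of missing information per Rubin (1987).

    estimates:  parameter estimates Q_i from the m imputed data sets
    std_errors: corresponding standard errors SE_i
    """
    q = np.asarray(estimates, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    m = len(q)
    U = np.mean(se**2)                  # within-imputation variance (3.B.4)
    B = np.var(q, ddof=1)               # between-imputation variance (3.B.5)
    r = (1 + 1/m) * B / U               # relative increase in variance (3.B.3)
    df = (m - 1) * (1 + 1/r)**2         # degrees of freedom (3.B.2)
    T = U + (1 + 1/m) * B               # total variance (3.B.6)
    gamma = (r + 2/(df + 3)) / (r + 1)  # FMI (3.B.1)
    return gamma, T

# Hypothetical example: five imputations of a regression coefficient
gamma, T = fmi([0.52, 0.48, 0.55, 0.50, 0.47], [0.10, 0.11, 0.10, 0.12, 0.10])
```

When the estimates vary little across imputations relative to their standard errors, B is small and the FMI is close to zero, indicating that missing data cost little information.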

4 Latent growth modeling (LGM) for longitudinal data analysis

4.1 Introduction

In this chapter, we will expand the application of structural equation modeling (SEM) to longitudinal data analysis, where subjects are followed up over time with repeated measures of each variable of interest. The distinctive features of longitudinal data include, but are not limited to, the following: (i) there are two sources of heterogeneity – within-subject or intra-subject variation and between-subject or inter-subject variation; (ii) within-subject observations usually are not independent; (iii) between-subject variation may not be constant over time; and (iv) longitudinal data are often incomplete or unbalanced (i.e. the number of repeated measures and the time intervals between follow-ups vary by subject). Various new statistical methods have been developed for longitudinal data analysis, among which multilevel modeling (MLM) (Bryk and Raudenbush 1992; Goldstein 1987, 1995; Mason et al. 1983; Raudenbush and Bryk 2002), generalized estimating equations (GEEs) (Diggle et al. 1998; Diggle et al. 2002), and latent growth modeling (LGM) (Chou et al. 1998; Duncan and Duncan 1994; Duncan et al. 2006; McArdle and Anderson 1990; Meredith and Tisak 1990; Muthén 1991; Willett and Sayer 1994) have gained popularity in longitudinal studies. All these approaches enable us to deal with the special features of longitudinal data. However, compared with MLM and GEEs, LGM is a more generalized approach that can readily handle multiple-outcome growth processes and include latent variables in modeling. In this chapter, we will introduce and demonstrate various latent growth models.

Structural Equation Modeling: Applications Using Mplus, Second Edition. Jichuan Wang and Xiaoqian Wang. © 2020 John Wiley & Sons Ltd. Published 2020 by John Wiley & Sons Ltd.


In Section 4.2, we discuss basic unconditional LGM, as well as conditional LGM that includes both time-invariant and time-varying covariates in the model. Section 4.3 extends the modeling to nonlinear outcome growth trajectories; LGM with various polynomial time functions, semiparametric piecewise methods, and free time scores, as well as distal outcomes, are discussed. Both Sections 4.4 and 4.5 are about multiple-growth-process LGM; the former deals with repeated multiple continuous outcomes, and the latter with repeated semi-continuous outcome measures. LGM for repeated categorical outcomes is discussed in Section 4.6. Using individually varying times of observation in LGM is presented in Section 4.7. Finally, the recently developed dynamic structural equation modeling (DSEM) and residual DSEM (RDSEM) are demonstrated in Section 4.8.

4.2 Linear LGM

A common practice in the application of LGM is to systematically investigate the features of outcome growth over time, such as the form of the latent growth trajectory (e.g. linear or nonlinear), the initial level of the outcome measure, the rate of outcome change, inter-individual variability in the change, the association between the rate of change and the initial level of the outcome, as well as the determinants of trajectory variations. We start with linear LGM in this section.

4.2.1 Unconditional linear LGM

Unconditional linear LGM simply examines the growth trajectory of the outcome under study without including any covariates or other outcome measures in the model. A simple unconditional linear latent growth model is described in Figure 4.1, where y0i–y5i are observed outcomes measured at six different time points (t0–t5); η0i and η1i are the latent intercept and slope growth factors, respectively. The latent intercept growth factor η0i represents the initial level of the outcome measure, and the latent slope growth factor η1i represents the rate of outcome change over time. The latent intercept and slope growth factors capture information about the growth trajectory of the ith individual. The observed outcome measures y0i–y5i are treated as the multiple indicators of these two latent growth factors.1 The factor loadings on the intercept growth factor η0i are all fixed to 1.0, and the factor loadings on the slope growth factor η1i are called time scores. The time scores play three roles. (i) They determine the form (linear or nonlinear) of the growth process. Assuming a linear growth trajectory and equal time intervals between observation time points, the time scores for the model depicted in Figure 4.1 may be set at [0, 1, 2, 3, 4, 5] for time points t0, t1, t2, t3, t4, and t5, respectively. Time scores for a nonlinear growth trajectory will be discussed in the next section. (ii) They define the centering point of the growth process. Setting the time score to 0 for t0 would define the first time point (i.e. baseline) t0 as the centering point of the growth process, so that the latent intercept growth factor η0i would represent the

1 The latent growth factors are not latent variables representing constructs in the sense of factor analysis. Instead, the repeated outcome measures are used to construct the growth factors to represent the shapes of the individual growth trajectories.

[Figure 4.1 path-diagram residue removed. The diagram shows the observed outcomes y0i–y5i each loading on the latent intercept growth factor η0i with loadings fixed at 1, and on the latent slope growth factor η1i with time scores 0, 1, 2, 3, 4, and 5; ζ0i and ζ1i are the disturbances of the two growth factors.]

Figure 4.1 Linear unconditional latent growth model.

initial level of the outcome measure under study. Different centering points can be set up; that is, the time score can be set to 0 at any time point, depending on how the centering point is to be interpreted. For example, by setting the time score to 0 at the last time point (t5 in our example), resulting in time scores of [−5, −4, −3, −2, −1, 0], we define the end of the observation period as the centering point of the growth process. As such, the estimated latent intercept growth factor would represent the level of the outcome measure at the end of the observation period. (iii) They define the scaling of the growth factors (Stoolmiller 1995). Very often, the scale of the time scores is matched with the observed time scale, and the time scores are specified on an a priori basis according to the hypothesized pattern of the growth trajectory. When the time intervals of the repeated outcome measures are unequal, the time scores can be specified accordingly to match the observed time points (Chan and Schmitt 2000; Muthén and Muthén 1998–2017). Suppose y0–y5 were measured at the baseline, and then measured 1, 2, 3, 4, and 6 months after the baseline; the time scores we would need to specify in the model would be [0, 1, 2, 3, 4, 6] instead of [0, 1, 2, 3, 4, 5]. The slope growth factor η1i represents the amount of predicted outcome change corresponding to one unit of change on the observed time scale (i.e. from time point tk to t(k+1)). However, time scores can also be estimated by the model; as such, the outcome growth trajectory is determined by the data. As a matter of fact, LGM can be considered an application of multilevel modeling in the framework of SEM. The simple unconditional linear latent growth model depicted in Figure 4.1 can be described in the following equations:

(4.1)

𝜂0i = 𝜂0 + 𝜍0i

(4.2)

𝜂1i = 𝜂1 + 𝜍1i

(4.3)

where Eq. (4.1) is a within-subject model in which yti is the observed outcome measure for the ith individual at time point t; the two latent growth factors η0i and η1i are random coefficients, and the λt values are time scores. The residual term εti in Eq. (4.1) is a composite error term at time t, representing both random measurement error

180

STRUCTURAL EQUATION MODELING

and the time-specific influence of the ith individual. Equations (4.2) and (4.3) are between-subject models, in which the two random coefficients (𝜂 0i and 𝜂 1i ) in Eq. (4.1) serve as dependent variables, where 𝜂 0 represents the model estimated overall mean level of the initial outcome, 𝜂 1 is the average rate of outcome change over time, and 𝜍 0i and 𝜍 1i are error terms representing between-subject variations in regard to the outcome growth trajectory. The within- and between-subject models can be combined into a mixed model by substituting Eqs. (4.2) and (4.3) into Eq. (4.1): yti = 𝜂0 + 𝜂1 𝜆t + (𝜍0i + 𝜆t 𝜍1i + 𝜀ti )

(4.4)

where the observed repeated outcome measure yti consists of a fixed (η0 + λt η1) component and a random (ς0i + λt ς1i + εti) component of the growth trajectory. The fixed component gives the predicted value of the outcome yti at a specific time point t. The variation of the growth trajectory is partitioned into between- and within-subject variations, represented by three unobserved sources of variation: ς0i – between-subject variation in the initial level of the outcome measure; λt ς1i – between-subject variation in the rate of outcome change; and εti – within-subject variation in repeated outcome measures. The random components of the growth trajectory capture the variations in individual trajectories over time as well as across individuals. The covariance between ς0i and ς1i shows the association between the initial outcome level and the rate of outcome change over time. Causal effects of the initial outcome level on the rate of outcome change can also be readily estimated in the model.

In the following, we use real-world data to demonstrate how to run a linear latent growth model. The data used for modeling are a sample of crack cocaine users (N = 430) who participated in an area health services research project in the mid-1990s in Ohio. The project employed a natural history research design to study stability and change in substance abuse, health status, and health service utilization, as well as the relationship between trajectories of drug use and health status over time. Participants were eligible to be included in the project if they: (i) had recently used crack cocaine, (ii) were at least 18 years of age, (iii) provided an address for a residence that was not a homeless shelter, (iv) were not involved in a formal drug abuse treatment program, (v) had no criminal charges pending, and (vi) had never injected drugs.
Written informed consent was obtained from all participants following a protocol that was approved by Wright State University’s Institutional Review Board (Dayton, Ohio). The sample consisted of 262 men and 168 women, with the majority of the sample being black (61.9%). More than 60% of the participants had a high school or college education. The mean age of the sample was 37.3 years (Siegal et al. 2002). The outcome measure used for model demonstration is depression status, which was assessed using the Beck Depression Inventory-II (BDI-II, Beck and Steer 1993), one of the most widely used instruments for measuring the severity of depression.2 2 The BDI is a 21-item multiple-choice self-report inventory in which each of the items reflects a single depression symptom or attitude. Response to each item is scored on a four-point scale with values of 0–3.

The BDI-defined depression score was constructed by summing up the values of all 21 BDI items (Beck and Steer 1993). Six repeated measures of BDI scores (y0i–y5i), assessed at baseline and five follow-ups with a 6-month interval between interviews, were used to model the growth trajectory of depression among the crack cocaine users during a 30-month observation period.

Mplus Program 4.1

TITLE: Unconditional Linear Latent Growth Model (LGM)
DATA: FILE = Crack_BDI2.dat;
VARIABLE: NAMES = PID Ethnic Gender Age Educ z0-z5 y0-y5 ts0-ts5;
  MISSING = ALL (-9);
  USEVAR = y0-y5;
ANALYSIS: ESTIMATOR = MLR;
MODEL: eta0 eta1 | y0@0 y1@1 y2@2 y3@3 y4@4 y5@5;
OUTPUT: SAMPSTAT PATTERNS TECH1;
PLOT: TYPE = PLOT3;
  Series = y0-y5(*);

The program reads raw data from the data file Crack_BDI2.dat, stored in the folder where the Mplus program file is saved. Variables z0–z5 and y0–y5 are crack cocaine use frequency and BDI scores, respectively, measured at the six time points t0–t5. In this model, y0–y5 are the outcome measures of interest. Missing values were coded −9 in the data and are specified in the VARIABLE command, and the pattern of missing data is checked by specifying the PATTERNS option in the OUTPUT command. Dealing with missing data is always a challenge in longitudinal studies. The most common and plausible assumption for missing data is missing at random (MAR) (Foster and Fang 2004; Hedeker and Gibbons 2006). MAR allows missingness to depend on observed values (either outcome measures or covariates), but not on the unobserved values (Little and Rubin 2002). The maximum likelihood (ML) estimator is used in Mplus in conjunction with the full information maximum likelihood (FIML) approach to deal with missing values under the MAR assumption. FIML utilizes all the information in the observed data and maximizes the likelihood of the model given the observed data (Finkbeiner 1979). FIML is more efficient and less biased than the traditional approaches (e.g. listwise deletion, pairwise deletion, or mean imputation) (Arbuckle 1996; Little and Rubin 1987; Wothke 2000). In the MODEL command, the latent intercept and slope growth factors are defined as eta0 and eta1, respectively, on the left side of the | symbol. The repeated outcome measures and the time scores for the growth model are specified on the right side of the | symbol. All the loadings on the intercept factor are automatically set to 1, and the time scores (the loadings on the latent slope growth factor) are fixed at 0, 1, 2, 3, 4, and 5, respectively, to define a linear growth model with equidistant time points.
The 0 time score for the slope growth factor at time t0 defines the baseline time point as the centering point, so the latent intercept growth factor represents the initial level of the outcome measure. The robust maximum likelihood (MLR) estimator is used for model estimation. By default, the mean and covariance structures (MACS) are analyzed.
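To build intuition for Eqs. (4.1)–(4.4) and for the centering of time scores outside Mplus, the sketch below (plain Python with NumPy; a naive two-stage OLS approximation on simulated data, not the MLR estimator Mplus uses, and the trajectory parameters are invented for illustration) simulates linear growth for N subjects at time scores 0–5, recovers the mean intercept and slope by averaging per-subject OLS fits, and shows that recentering the time scores to [−5, …, 0] moves the intercept to the end of the observation period without changing the slope.

```python
import numpy as np

rng = np.random.default_rng(1)
n, times = 430, np.arange(6.0)               # time scores 0..5, as in Figure 4.1

eta0 = 15.0 + 2.0 * rng.standard_normal(n)   # individual intercepts (baseline level)
eta1 = -0.8 + 0.3 * rng.standard_normal(n)   # individual slopes (rate of change)
y = eta0[:, None] + eta1[:, None] * times + rng.standard_normal((n, 6))

def two_stage_means(y, scores):
    """Fit an OLS line to each subject's trajectory and average the
    coefficients: crude estimates of the mean growth factors."""
    X = np.column_stack([np.ones_like(scores), scores])
    coef, *_ = np.linalg.lstsq(X, y.T, rcond=None)  # 2 x n coefficient matrix
    return coef.mean(axis=1)                        # (mean intercept, mean slope)

i0, s0 = two_stage_means(y, times)       # centering at t0: intercept = baseline level
i5, s5 = two_stage_means(y, times - 5)   # centering at t5: intercept = level at end
# The slope is unchanged; the intercept shifts by 5 * slope: i5 = i0 + 5 * s0
```

The two-stage averages land close to the simulated means (about 15 and −0.8 here), and the recentered intercept equals the fitted value at the last time point, mirroring the interpretation of time scores [−5, −4, −3, −2, −1, 0] described above.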


The PLOT command at the end of the Mplus program plots the growth curve of the linear latent growth model. Selected model results are shown in Table 4.1. The upper panel of Table 4.1 shows the missing-data patterns. For example, of the 430 total cases in the sample, 271 cases in Pattern 1 did not miss a single interview; 17 cases in Pattern 2 missed the last follow-up interview; and so on.

Table 4.1 Selected Mplus output: unconditional linear LGM.

…
MISSING DATA PATTERNS (x = not missing)
[The indicator grid showing which of Y0–Y5 are observed in each of the 26 patterns was garbled in extraction and is omitted here; Pattern 1 has complete data on Y0–Y5.]

MISSING DATA PATTERN FREQUENCIES
Pattern   Frequency     Pattern   Frequency     Pattern   Frequency
   1         271          10          3           19          2
   2          17          11          1           20          1
   3          11          12          2           21          1
   4          17          13          1           22          1
   5           8          14          3           23          1
   6           1          15         14           24          1
   7           2          16         11           25          1
   8          11          17          4           26         36
   9           8          18          1

…
MODEL FIT INFORMATION

Number of Free Parameters                        11

Loglikelihood
  H0 Value                                 -7570.433
  H0 Scaling Correction Factor for MLR         1.336
  H1 Value                                 -7538.495
  H1 Scaling Correction Factor for MLR         1.300

Table 4.1 (continued)

Information Criteria
  Akaike (AIC)                             15162.866
  Bayesian (BIC)                           15207.567
  Sample-Size Adjusted BIC                 15172.660
    (n* = (n + 2) / 24)

Chi-Square Test of Model Fit
  Value                                       50.117*
  Degrees of Freedom                          16
  P-Value                                      0.0000
  Scaling Correction Factor for MLR            1.275

…
RMSEA (Root Mean Square Error Of Approximation)
  Estimate
  90 Percent C.I.
  Probability RMSEA