Solved – McDonald’s Omega: Assumptions, Coefficients and Interpretation

psychometrics, reliability

After browsing Cross Validated and several other sources on the web, I still cannot get a grip on McDonald's Omega as a measure of internal consistency. I have a hunch that many fellow social scientists feel similarly insecure about the measure, so I hope to get some clarification on several aspects of it:

Assumptions / Prerequisites

While the assumptions for Cronbach's Alpha are commonly discussed (e.g. Cronbach Alpha Assumptions), I haven't managed to get a full picture of the prerequisites for McDonald's Omega. My questions are:

  • What are the general assumptions underlying Omega?
  • Is there a rule of thumb regarding sample size, or a ratio between variables and observations that should be considered?
  • Is Cronbach's Alpha superior to Omega under any circumstances at all?

Coefficients and Interpretation

Secondly, there still appears to be a great deal of confusion around the different Omega coefficients, perhaps most notably those returned by the psych package in R. For clarification, maybe someone could offer a full interpretation of the coefficients in the following example from ?psych::omega:

library(psych)
#create 9 variables with a hierarchical structure
v9 <- sim.hierarchical()

#find omega 
v9.omega <- omega(v9,digits=2)

> v9.omega$omega.group
        total   general     group
g   0.7984002 0.6857363 0.1126608
F1* 0.7449332 0.6034008 0.1415325
F2* 0.6303512 0.4034189 0.2269323
F3* 0.5022309 0.2460886 0.2561423

> v9.omega$omega.lim
[1] 0.858888

My questions regarding this example:

  • How do the interpretations of omega.tot and omega_h (general) differ in this example? Or: what would the correct global measure of internal consistency for the entire measure/questionnaire be?
  • What is group telling us?
  • When is omega.lim relevant?

In addition: It appears that omega_h (general) gets the most attention in posts/reports, but these values strike me as surprisingly low in almost every example I have seen. How come?

Thanks

Best Answer

The topic is old, but the questions are interesting, so I would like to add some of the available information on them.

Statistical requirements/assumptions underlying Omega:

Omega and omega hierarchical are based on parameter estimates (i.e., estimates of factor loadings and factor variances) that are derived for a certain CFA model. Hence, two vital statistical requirements need to be fulfilled: (1) proper interpretation of omega and omega hierarchical requires that the target model fits the empirical data well, and (2) parameter estimates need to be precise (Brunner, Nagy, & Wilhelm, 2012).
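
To see what these parameter estimates actually feed into, here is a minimal sketch (not the psych source code) of how omega hierarchical and omega total are assembled from a Schmid-Leiman solution; the $schmid$sl slot and its column names ("g", "u2") are assumptions based on recent versions of the psych package:

library(psych)
R  <- sim.hierarchical()              # model-implied correlation matrix (9 items)
sl <- omega(R, digits = 2)$schmid$sl  # Schmid-Leiman loadings (assumed slot/columns)
g  <- sl[, "g"]                       # loadings on the general factor
u2 <- sl[, "u2"]                      # item uniquenesses
Vt <- sum(R)                          # variance of the unit-weighted sum score

sum(g)^2 / Vt        # omega_h: variance share due to the general factor alone
(Vt - sum(u2)) / Vt  # omega_total: variance share due to all common factors

If the factor model fits poorly, the loadings g and uniquenesses u2 are distorted, and both ratios inherit that distortion; this is why model fit is a prerequisite, not an afterthought.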

Rule of thumb regarding sample size, or a ratio between variables and observations that should be considered?

Similarly, the sample size should follow CFA sample-size guidelines, preferably determined with simulation methods such as those provided by the R simsem package.

sample size needs to be sufficiently large to obtain trustworthy estimates of model parameters (Yang & Green, 2010). In general, a larger sample size is always better, and a sample size of N ≥ 200 allows proper estimation of model parameters (e.g., nonnegative variances of subtest-specific factors) under a large variety of conditions (Boomsma & Hoogland, 2001). There is also growing consensus that the required sample size depends on the properties of the model investigated and the data to be analyzed: A higher ratio of measures per factor and higher factor loadings may compensate for smaller sample size (Marsh, Hau, Balla, & Grayson, 1998; Yang & Green, 2010). Thus, methodologists strongly encourage applied researchers to conduct Monte Carlo studies of the target CFA models to determine the required sample size (L. K. Muthén & Muthén, 2002). (Brunner, Nagy, & Wilhelm, 2012)
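
As a concrete illustration of such a Monte Carlo study, here is a minimal sketch with simsem; the one-factor structure, the population loadings, and N = 200 are arbitrary assumptions for demonstration, not recommendations:

library(lavaan)
library(simsem)

# fully specified population model used to generate data (all values are assumptions)
pop <- '
  g =~ 0.7*y1 + 0.7*y2 + 0.6*y3 + 0.6*y4 + 0.5*y5 + 0.5*y6
  g ~~ 1*g
  y1 ~~ 0.51*y1
  y2 ~~ 0.51*y2
  y3 ~~ 0.64*y3
  y4 ~~ 0.64*y4
  y5 ~~ 0.75*y5
  y6 ~~ 0.75*y6
'
# analysis model fitted to each generated sample
mod <- 'g =~ y1 + y2 + y3 + y4 + y5 + y6'

out <- sim(nRep = 500, model = mod, n = 200,
           generate = pop, lavaanfun = "cfa", seed = 42)
summary(out)  # inspect convergence rates, parameter bias, and coverage

If the summary shows noticeable bias or convergence failures at the planned N, the sample is too small for that loading pattern, and omega estimates computed from such solutions should not be trusted.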

An interesting reference for this discussion is: Brunner, M., Nagy, G., & Wilhelm, O. (2012). A tutorial on hierarchically structured constructs. Journal of Personality, 80(4), 796–846. doi:10.1111/j.1467-6494.2011.00749.x