1. Go back to Exploratory Factor Analysis
If you're getting very bad CFA fits, then it's often a sign that you have jumped too quickly to CFA. You should go back to exploratory factor analysis to learn about the structure of your test. If you have a large sample (in your case you don't), then you can split your sample to have an exploratory and a confirmatory sample.
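If you do have the sample size for it, the split itself is simple; a minimal sketch (hypothetical sample size, numpy only):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 600  # hypothetical total sample size

# Randomly split respondents into an exploratory and a confirmatory half
idx = rng.permutation(n)
explore_idx, confirm_idx = idx[: n // 2], idx[n // 2 :]
print(len(explore_idx), len(confirm_idx))  # 300 300
```

You would then run the EFA on the rows indexed by `explore_idx` and fit the CFA only to the rows indexed by `confirm_idx`.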
- Apply exploratory factor analysis procedures to check whether the theorised number of factors seems reasonable. I'd check the scree plot to see what it suggests. I'd then check the rotated factor loading matrix with the theorised number of factors, as well as with one or two more and one or two fewer factors. You can often see signs of under- or over-extraction of factors by looking at such factor loading matrices.
- Use exploratory factor analysis to identify problematic items. In particular, look for items that load most strongly on a non-theorised factor, items with large cross-loadings, and items that don't load highly on any factor.
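As an illustration of the scree-style check, here is a minimal, self-contained sketch on simulated data (the simulated structure and all values are hypothetical; in practice you'd run a dedicated EFA routine such as `psych::fa` in R on your own data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated example: 300 respondents, 9 items, a hypothetical 3-factor structure
n, n_factors, items_per_factor = 300, 3, 3
factors = rng.normal(size=(n, n_factors))
loadings = np.kron(np.eye(n_factors), np.full((items_per_factor, 1), 0.8))
X = factors @ loadings.T + rng.normal(scale=0.6, size=(n, n_factors * items_per_factor))

# Scree-style check: eigenvalues of the item correlation matrix, largest first
eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
print(int(np.sum(eigvals > 1.0)))  # eigenvalues-above-1 count, a rough guide only
```

Here the eigenvalue pattern recovers the three simulated factors; with real data you would inspect the full scree plot (and ideally parallel analysis) rather than relying on the eigenvalues-above-1 rule alone.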
The benefit of EFA is that it gives you a lot of freedom, so you'll learn much more about the structure of the test than you will from looking only at CFA modification indices.
Anyway, hopefully this process will have identified a few issues and solutions. For example, you might drop a few items, or you might update your theoretical model of how many factors there are, and so on.
2. Improve the Confirmatory Factor Analysis Fit
There are many points that could be made here:
CFAs on scales with many items per factor often perform poorly by traditional fit standards. This often leads people (a response I think is often unfortunate) to form item parcels or to use only three or four items per scale. The problem is that typically proposed CFA structures fail to capture the small nuances in the data (e.g., small cross-loadings, items within a test that correlate a little more with each other than with the rest, minor nuisance factors). These misfits are amplified when there are many items per scale.
Here are a few responses to the above situation:
- Do exploratory SEM (ESEM), which allows for various small cross-loadings and related terms.
- Examine modification indices and incorporate some of the largest reasonable modifications, e.g., a few within-scale correlated residuals or a few cross-loadings (see `modificationindices(fit)` in lavaan).
- Use item parcelling to reduce the number of observed variables.
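For the parcelling option, the mechanics are just aggregation; a minimal sketch (hypothetical item counts, numpy only):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 12))  # hypothetical: 200 respondents, 12-item scale

# Form four 3-item parcels by averaging consecutive triples of items;
# in practice, parcel assignment should be justified (e.g., balanced by content)
parcels = X.reshape(200, 4, 3).mean(axis=2)
print(parcels.shape)  # (200, 4)
```

The CFA is then fit to the four parcel scores instead of the twelve items, which hides (rather than resolves) item-level misfit, hence the earlier caveat about this strategy.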
General comments
So in general, if your CFA model is really bad, return to EFA to learn more about your scale. Alternatively, if your EFA is good and your CFA just looks a little bad due to the well-known problems of having many items per scale, then the standard CFA adaptations mentioned above are appropriate.
I actually do not think you have conducted CFAs, as you believe you have, for your second and third models. Instead, for a couple of reasons, it reads as though you have just conducted three separate EFAs. For one, you mention the term "orthogonal"--a type of rotation method--and factor scores, but rotation and factor scores are features of EFA, not CFA. Moreover, in CFA, if you fit a model specifying three uncorrelated factors, the estimated correlations of those factors would in fact be zero, and if the factors were truly correlated, this specification would worsen the fit of your model.
With that, there is still the question of why your estimated factor correlations and factor score correlations are changing from model to model. You actually have identified the likely cause of these discrepancies yourself:
In particular, in model 2, the 3-factor orthogonal CFA assumes no correlation between factors, yet the factor scores are correlated.
Orthogonal rotation methods assume factors are uncorrelated; orthogonal methods do not make factors uncorrelated (Fabrigar & Wegener, 2011). Thus, when using this rotation method, you could still end up with factors that are correlated when you somehow estimate their correlations (e.g., as you did using factor scores). But if factors are truly correlated, and you assume no correlation, the true shared variance between factors needs to go somewhere, so it ends up getting suppressed back down to the factor loadings (Osborne, 2015). Lay the factor matrix of your orthogonal solution next to the pattern matrix of your oblique solution (i.e., with correlations estimated); I'm willing to bet you will see higher "cross-loadings" with the former than with the latter. Put another way, your orthogonal solution will exhibit worse "simple structure" (Fabrigar & Wegener, 2011).
The end result is that the factor scores from your orthogonal and oblique models are computed using fairly different factor loading estimates, and the orthogonal solution suppresses the correlations between factors. So you shouldn't be surprised that the oblique rotation factor scores show stronger correlations.
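The suppression point can be made concrete with a small numerical sketch (illustrative values only): an oblique solution with pattern matrix Λ and factor correlation matrix Φ has an orthogonal counterpart with the same implied common variance, obtained by post-multiplying Λ by a Cholesky factor of Φ, and that counterpart shows cross-loadings the oblique solution does not:

```python
import numpy as np

# Hypothetical oblique structure: 6 items, 2 factors correlated at phi = 0.5
L = np.array([[0.8, 0.0]] * 3 + [[0.0, 0.8]] * 3)
Phi = np.array([[1.0, 0.5], [0.5, 1.0]])

# An orthogonal solution with the same implied common variance:
# post-multiply the pattern matrix by a Cholesky factor of Phi
L_orth = L @ np.linalg.cholesky(Phi)
assert np.allclose(L_orth @ L_orth.T, L @ Phi @ L.T)  # identical fit
print(np.round(L_orth, 2))
```

The oblique pattern matrix has clean simple structure (zero cross-loadings), while the equally well-fitting orthogonal loadings pick up cross-loadings of about .4 on the first factor: the shared variance has nowhere to go but into the loadings.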
The reason your factor score correlations from your oblique solution differ from the estimated factor correlations from the same solution is a bit complicated, but ttnphns' comment above is a good summary--the factor scores are only approximations, and therefore their correlations are only approximations, whereas the estimated correlations are based on the unobserved, error-free latent variables from the EFA (see DiStefano, Zhu, & Mîndrilă, 2009; Grice, 2001 for more details on the nature of factor scores).
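A small simulation makes the approximation point visible (everything here is hypothetical, and the true loadings are used in place of estimates purely to isolate the factor-score issue): even with the correct model, regression-method factor scores have a correlation that differs noticeably from the true latent correlation.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
Phi = np.array([[1.0, 0.5], [0.5, 1.0]])  # true latent correlation = .5
F = rng.multivariate_normal([0.0, 0.0], Phi, size=n)
L = np.array([[0.7, 0.0]] * 3 + [[0.0, 0.7]] * 3)
uniq = 1.0 - 0.49  # unique variance per item
X = F @ L.T + rng.normal(scale=np.sqrt(uniq), size=(n, 6))

# Regression (Thurstone) factor scores, computed from the true model
Sigma = L @ Phi @ L.T + np.diag(np.full(6, uniq))
F_hat = X @ np.linalg.solve(Sigma, L @ Phi)

print(round(float(np.corrcoef(F_hat.T)[0, 1]), 2))
```

In this setup the score correlation comes out around .6 rather than the true .5: the scores are fallible composites of the items, so their correlation is a distorted estimate of the latent one.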
References
DiStefano, C., Zhu, M., & Mîndrilă, D. (2009). Understanding and using factor scores: Considerations for the applied researcher. Practical Assessment, Research & Evaluation, 14, 1-11.
Fabrigar, L. R., & Wegener, D. T. (2011). Exploratory factor analysis. New York, NY: Oxford University Press.
Grice, J. W. (2001). Computing and evaluating factor scores. Psychological Methods, 6, 430-450.
Osborne, J. W. (2015). What is rotating in exploratory factor analysis? Practical Assessment, Research & Evaluation, 20, 1-7.
Best Answer
When you have ordinal data and a relatively complex model, traditional maximum likelihood estimation often will not converge. If that is what is happening, use DWLS (diagonally weighted least squares); in lavaan, declaring your items via the `ordered` argument selects a DWLS-based estimator for you.