Background
The different fit indices tend to be sensitive to different forms of misspecification. Looking at the formulae, where $T$ is the model statistic, $df$ the degrees of freedom, and subscripts indicate baseline versus target:
$$
TLI = \frac{(T_b / df_b) - (T_t / df_t)}{(T_b / df_b) - 1}
$$
$$
SRMR = \sqrt{\frac{2\sum_{i=1}^{p}\sum_{j=1}^{i} \left(\frac{s_{ij} - \hat{\sigma}_{ij}}{s_{ii}s_{jj}}\right)^2}{p(p+1)}}
$$
One of the keys here is that indices like the TLI and CFI (not shown) incorporate the degrees of freedom, which means simpler models that do fairly well will be preferred. The SRMR does not; there is no benefit from a more parsimonious model.
It is perhaps not surprising then that the different fit indices tend to be sensitive to different types of model misspecification.
A further hint is the squared term: particular misspecified covariances will contribute much more, with contributions dropping off quadratically as the residual nears zero.
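As a concreteness check, both formulas are easy to compute directly. A minimal NumPy sketch (the function names and toy inputs below are mine, not from any SEM package):

```python
import numpy as np

def tli(T_t, df_t, T_b, df_b):
    """Tucker-Lewis index from target and baseline chi-square statistics."""
    return ((T_b / df_b) - (T_t / df_t)) / ((T_b / df_b) - 1)

def srmr(S, Sigma_hat):
    """Standardized root mean square residual from the sample (S) and
    model-implied (Sigma_hat) covariance matrices."""
    p = S.shape[0]
    sd = np.sqrt(np.diag(S))          # sample standard deviations
    total = 0.0
    for i in range(p):
        for j in range(i + 1):        # lower triangle, including diagonal
            resid = (S[i, j] - Sigma_hat[i, j]) / (sd[i] * sd[j])
            total += resid ** 2
    return np.sqrt(2 * total / (p * (p + 1)))
```

Note that `tli` rewards a small $T_t/df_t$ ratio (parsimony enters through $df_t$), while `srmr` only ever sees the residuals; degrees of freedom never appear in it.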
Latent Growth Models
Turning to latent growth models, the form is very specific and many parameters are fixed. You have a piecewise model, but you still have many time points (15) and are fitting only two lines. It is not at all surprising to me that there is some misspecification here. The CFI/TLI are likely relatively good because of how parsimonious the model is. That is a good sign, but the SRMR is disturbingly high. It may not change your parameters of interest much, but I would definitely want to at least figure out what part of the model was misspecified.
Suggestions
The tools to determine and correct model misspecification are basically the same regardless of the problem. That is perhaps an oversimplification, but not by much.
In your case, you do not have a measurement issue (that is, you do not need to examine whether there should be alternate factors or different groupings of the items per se); however, it may be unreasonable to assume linear growth, even piecewise linear growth.
Another common area of misspecification with growth models is the error structure. It is possible, perhaps even likely, that residuals will be more highly correlated with nearby time points than with those farther away in time. If there is some cyclical pattern to the assessments, that may also play a role (e.g., seasons, times of day, days of week, etc.).
Examine the standardized residual covariances: which ones are high? What happens if you add a residual covariance to account for them? Consider relaxing the linear-time constraint: you could try quadratic time or freely estimate the time scores. You can also try modification indices to see "automated" suggestions for how to improve your model.
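In lavaan you would get these from `residuals(fit, type = "cor")` and `modificationindices(fit)`, but the inspection itself is language-agnostic. A sketch of flagging the worst standardized residuals given sample and model-implied covariance matrices (all values here are toy numbers for illustration):

```python
import numpy as np

def largest_residuals(S, Sigma_hat, top=3):
    """Return the (i, j) pairs with the largest standardized residuals,
    i.e. where the model reproduces the observed covariances worst."""
    sd = np.sqrt(np.diag(S))
    resid = (S - Sigma_hat) / np.outer(sd, sd)   # standardize by SDs
    i, j = np.tril_indices_from(resid, k=-1)     # off-diagonal lower triangle
    order = np.argsort(-np.abs(resid[i, j]))[:top]
    return [(int(i[k]), int(j[k]), float(resid[i[k], j[k]])) for k in order]

# Toy example: the model badly misses one covariance (variables 0 and 1)
S = np.array([[1.0, 0.6, 0.1],
              [0.6, 1.0, 0.2],
              [0.1, 0.2, 1.0]])
Sigma_hat = np.array([[1.0, 0.2, 0.1],
                      [0.2, 1.0, 0.2],
                      [0.1, 0.2, 1.0]])
worst = largest_residuals(S, Sigma_hat, top=1)
```

Because of the squared term in the SRMR, a single residual of 0.4 like this one contributes sixteen times as much as one of 0.1, which is why a handful of such pairs can drive the index up.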
If all of that seems too complex or variable, try simplifying your model. Rather than fitting the piecewise model, fit a model to just the first piece (ignore the second piece and leave it out for now). Make sure that your growth models are solid for each of the pieces before combining them into a model for all 15 time points. The same approaches I described can be used with the individual pieces.

What happens if the individual pieces fit well but the combined model does not? This suggests it is the relations between the pieces that are misspecified: which time points from the first and second pieces are most highly related? What is going on around those measures that you may need to account for, either in the functional form (linear, etc.) or with residual covariances? At each step you can use the residual covariance matrices, modification indices, your own theoretical judgement, examination of the raw correlation matrices, and data visualization to help get a handle on these things.
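One way to keep the pieces straight is to write out the slope loadings explicitly. A sketch of the loading (design) matrix for an intercept plus two linear pieces over 15 time points, assuming a hypothetical knot at wave 8 (your knot placement may differ):

```python
import numpy as np

def piecewise_loadings(n_times=15, knot=8):
    """Loading matrix for intercept + two linear pieces.
    Column 0: intercept; column 1: slope before the knot (flat after);
    column 2: slope after the knot (zero before)."""
    t = np.arange(n_times)
    return np.column_stack([
        np.ones(n_times),         # intercept
        np.minimum(t, knot),      # piece 1: grows until the knot, then flat
        np.maximum(t - knot, 0),  # piece 2: zero until the knot, then grows
    ])

L = piecewise_loadings()
# Before the knot only the piece-1 column moves; after it, only piece 2.
```

Fitting each piece alone amounts to dropping one of the slope columns and the corresponding rows, so the diagnostics above carry over directly.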
1. Go back to Exploratory Factor Analysis
If you're getting very bad CFA fits, then it's often a sign that you have jumped too quickly to CFA. You should go back to exploratory factor analysis to learn about the structure of your test. If you have a large sample (in your case you don't), then you can split your sample to have an exploratory and a confirmatory sample.
- Apply exploratory factor analysis procedures to check whether the theorised number of factors seems reasonable. I'd check the scree plot to see what it suggests. I'd then check the rotated factor loading matrix with the theorised number of factors as well as with one or two more and one or two less factors. You can often see signs of under or over extraction of factors by looking at such factor loading matrices.
- Use exploratory factor analysis to identify problematic items. In particular, look for items loading most on a non-theorised factor, items with large cross-loadings, and items that don't load highly on any factor.
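The raw ingredients of a scree plot are just the eigenvalues of the item correlation matrix. A minimal sketch with an invented correlation matrix whose items plausibly form two factors (the values are toy numbers, not real data):

```python
import numpy as np

# Toy correlation matrix: items 0-1 hang together, items 2-3 hang together
R = np.array([[1.0, 0.7, 0.1, 0.1],
              [0.7, 1.0, 0.1, 0.1],
              [0.1, 0.1, 1.0, 0.7],
              [0.1, 0.1, 0.7, 1.0]])

eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]  # largest first, for the scree plot
n_kaiser = int(np.sum(eigvals > 1))             # Kaiser criterion: eigenvalues > 1
```

Here the first two eigenvalues stand well clear of the rest, which is the "elbow" you would look for in the scree plot; in practice parallel analysis is a sturdier criterion than the eigenvalue-greater-than-one rule used here for brevity.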
The benefit of EFA is that it gives a lot of freedom, so you'll learn a lot more about the structure of the test than you will from looking only at CFA modification indices.
Anyway, hopefully from this process you may have identified a few issues and solutions. For example, you might drop a few items; you might update your theoretical model of how many factors there are and so on.
2. Improve the Confirmatory Factor Analysis Fit
There are many points that could be made here:
CFAs on scales with many items per scale often perform poorly by traditional standards. This often leads people (a response I think is often unfortunate) to form item parcels or to use only three or four items per scale. The problem is that typically proposed CFA structures fail to capture the small nuances in the data (e.g., small cross-loadings, items within a test that correlate a little more than others, minor nuisance factors). These are amplified with many items per scale.
Here are a few responses to the above situation:
- Do exploratory SEM that allows for various small cross-loadings and related terms
- Examine modification indices and incorporate some of the largest reasonable modifications, e.g., a few within-scale correlated residuals or a few cross-loadings; see `modificationindices(fit)` in lavaan.
- Use item parcelling to reduce the number of observed variables
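Parcelling itself is just averaging subsets of items into fewer indicators; the hard part is choosing the assignment well. A sketch (the item-to-parcel assignment here is arbitrary, purely for illustration):

```python
import numpy as np

def parcel(items, assignment):
    """Average columns of an (n_obs, n_items) data matrix into parcels.
    `assignment` maps parcel name -> list of item column indices."""
    return {name: items[:, cols].mean(axis=1)
            for name, cols in assignment.items()}

# Toy data: 4 observations on 6 items, parcelled into 3 two-item parcels
rng = np.random.default_rng(0)
items = rng.normal(size=(4, 6))
parcels = parcel(items, {"p1": [0, 1], "p2": [2, 3], "p3": [4, 5]})
```

Averaging smooths over item-level nuances (small cross-loadings, locally correlated residuals), which is exactly why parcelled models fit better, and also why the practice is contested.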
General comments
So in general, if your CFA model is really bad, return to EFA to learn more about your scale. Alternatively, if your EFA is good and your CFA just looks a little bad due to the well-known problems of having many items per scale, then the standard CFA approaches mentioned above are appropriate.
Best Answer
A few suggestions/clarifications before directly addressing your questions.
First, a significant $\chi^2$ doesn't indicate poor fit; it indicates that you have rejected the null of perfect fit (i.e., $\Sigma = S$). Kline (2015) aside, most SEM specialists (e.g., Brown, 2006; Finch & French, 2015; Hu & Bentler, 1995; Little, 2013; MacCallum & Austin, 2000; West et al., 2012) do not seem to recommend that you put a great deal of stock in this test, if only (amongst other reasons) because many find the null hypothesis a pretty ridiculous aspiration.
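To make the point concrete: the exact-fit test is just an upper-tail chi-square probability, so with even moderate sample sizes, trivially small discrepancies from $\Sigma = S$ will push it below .05. A sketch with hypothetical values (not your model's statistics):

```python
from scipy.stats import chi2

# Hypothetical model test statistic and degrees of freedom
T, df = 120.0, 90
p = chi2.sf(T, df)  # upper-tail probability of the exact-fit test

# A statistic only modestly above its df already "fails" the test,
# even though the per-df discrepancy (T/df = 1.33) is quite mild.
```

This is why the specialists cited above lean on approximate-fit indices and residual inspection rather than on this single p-value.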
Secondly, be careful not to get sucked into specifying correlated error terms willy-nilly, just on account of the mod indexes. Many reviewers are savvy enough to see this for what it is: post-hoc fit chasing. If you have good a priori reason for specifying these, great, but if not, you might reconsider (and as you will see, your fit might not be as bad as you originally feared).
Now to your questions:
References
Asparouhov, T., & Muthén, B. (2009). Exploratory structural equation modeling. Structural Equation Modeling, 16(3), 397-438.
Brown, T. A. (2006). Confirmatory factor analysis for applied research. New York, NY: Guilford Press.
Finch, W. H., & French, B. F. (2015). Latent variable modeling with R. New York, NY: Routledge.
Hu, L., & Bentler, P. M. (1995). Evaluating model fit. In R. H. Hoyle (Ed.), Structural equation modeling: Concepts, issues, and applications (pp. 76-99). Thousand Oaks, CA: Sage.
Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1-55.
Kline, R. B. (2015). Principles and practice of structural equation modeling. New York, NY: Guilford Press.
Little, T. D. (2013). Longitudinal structural equation modelling. New York, NY: Guilford Press.
MacCallum, R. C., & Austin, J. T. (2000). Applications of structural equation modeling in psychological research. Annual Review of Psychology, 51(1), 201-226.
McNeish, D., An, J., & Hancock, G. R. (2018). The thorny relation between measurement quality and fit index cutoffs in latent variable models. Journal of Personality Assessment, 100(1), 43-52.
West, S. G., Taylor, A. B., & Wu, W. (2012). Model fit and model selection in structural equation modeling. In R. H. Hoyle (Ed.), Handbook of structural equation modeling (pp. 209-231). New York, NY: Guilford Press.