Background
The different fit indices tend to be sensitive to different forms of misspecification. Looking at the formulae, where $T$ is the model statistic, $df$ the degrees of freedom, and subscripts indicate baseline versus target:
$$
TLI = \frac{(T_b / df_b) - (T_t / df_t)}{(T_b / df_b) - 1}
$$
$$
SRMR = \sqrt{\frac{2\sum_{i=1}^{p}\sum_{j=1}^{i}\left(\frac{s_{ij} - \hat{\sigma}_{ij}}{s_{ii}\,s_{jj}}\right)^2}{p(p+1)}}
$$
One of the keys here is that indices like the TLI and CFI (not shown) incorporate the degrees of freedom, so simpler models that still do fairly well are preferred. The SRMR does not: there is no benefit for a more parsimonious model.
Given these differences, it is perhaps not surprising that the indices can disagree about the same model.
A further hint is the squared term: the covariances that are most badly misspecified contribute much more to the SRMR (the contribution drops off quadratically as a residual nears zero).
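To make the contrast concrete, both indices can be computed directly from the formulae above. A minimal NumPy sketch (reading $s_{ii}$, $s_{jj}$ as observed standard deviations, and with illustrative function names):

```python
import numpy as np

def tli(T_b, df_b, T_t, df_t):
    """Tucker-Lewis index from baseline (b) and target (t) chi-squares and dfs."""
    return ((T_b / df_b) - (T_t / df_t)) / ((T_b / df_b) - 1)

def srmr(S, Sigma_hat):
    """SRMR from the observed (S) and model-implied (Sigma_hat) covariance
    matrices, standardizing each residual by the observed standard deviations."""
    p = S.shape[0]
    sd = np.sqrt(np.diag(S))
    resid = (S - Sigma_hat) / np.outer(sd, sd)  # standardized residual matrix
    tri = np.tril_indices(p)                    # lower triangle incl. diagonal
    # 2 * sum / (p(p+1)) == sum over unique elements / (p(p+1)/2)
    return np.sqrt(np.sum(resid[tri] ** 2) / (p * (p + 1) / 2))
```

Note that `tli` rewards parsimony through the $T/df$ ratios, while `srmr` never sees the degrees of freedom at all, and the `** 2` makes a few large residuals dominate the sum.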
Latent Growth Models
Turning to latent growth models, the form is very specific and many parameters are fixed. You have a piecewise model, but still you have many time points (15) and are only fitting two lines. It is not at all surprising to me that there is some misspecification here. The CFI/TLI are likely relatively good because of how parsimonious the model is. That is a good sign, but the SRMR is disturbingly high. It may not change your parameters of interest much, but I would definitely want to at least figure out what part of the model was misspecified.
Suggestions
The tools to determine and correct model misspecification are basically the same regardless of the problem. That is perhaps an oversimplification, but not by much.
In your case, you do not have a measurement issue (that is, you do not need to examine whether there should be alternate factors or different groupings of the items per se); however, it may be unreasonable to assume linear growth, even piecewise linear growth.
Another common area of misspecification with growth models is the error structure. It is possible, perhaps even likely, that residuals will be more highly correlated with nearby time points than with those farther away in time. If there is some cyclical pattern to the assessments (e.g., seasons, time of day, day of week), that may also play a role.
Examine the standardized residual covariances --- which ones are high? What happens if you add a residual covariance to account for one of them? Consider relaxing the linear time constraint: you could try quadratic time or freely estimate some of the time loadings. You can also try modification indices to see "automated" suggestions for how to improve your model.
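The first step above can be scripted. A small sketch that flags large standardized residual covariances, given the observed and model-implied matrices your software reports (the variable names and the 0.1 cutoff are illustrative, not a standard):

```python
import numpy as np

def flag_residuals(S, Sigma_hat, names, cutoff=0.1):
    """Return (row, col, value) for standardized residual covariances
    whose absolute value exceeds the cutoff."""
    sd = np.sqrt(np.diag(S))
    resid = (S - Sigma_hat) / np.outer(sd, sd)
    flagged = []
    for i in range(len(names)):
        for j in range(i):  # strictly lower triangle: off-diagonal pairs only
            if abs(resid[i, j]) > cutoff:
                flagged.append((names[i], names[j], round(float(resid[i, j]), 3)))
    return flagged
```

With 15 time points there are 105 unique covariances, so a filtered list like this is much easier to scan than the full residual matrix; clusters of flags among adjacent time points would point toward the autocorrelated-error structure described above.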
If all of that seems too complex or variable, try simplifying your model. Rather than fitting the piecewise model, fit a model to just the first piece (leave the second piece out for now). Make sure that your growth model is solid for each of the pieces before combining them into a model for all 15 time points. The same approaches I described can be used with the individual pieces.

What happens if the individual pieces fit great, but combined they do not? That suggests the relations between the pieces are being misspecified --- which time points from the first and second pieces are most highly related? What is going on around those measures that you may need to account for, either in the functional form (linear, etc.) or with residual covariances?

At each step you can use the residual covariance matrices, modification indices, your own theoretical judgment, examination of the raw correlation matrices, and data visualization to help get a handle on these things.
Regarding your first question, part 1:
Linear regression is "just-identified" in SEM. This is also called "fully-saturated."
A simpler example with 2 IVs and 1 DV gives:
3 variances and 3 covariances in the covariance matrix. This is your observed information for the SEM: 6 unique elements.
Your regression includes 2 regression beta coefficients, 2 IV variances, 1 covariance between the IVs (you may or may not realize this is in the model, but it is), and 1 error variance = 6 parameters.
6 pieces of information $-$ 6 parameters = 0 degrees of freedom.
Unless constraints are made, regression models in SEM are always fully saturated and no assessment of model fit is possible.
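The counting argument above generalizes to any covariance-structure model; a one-liner makes it easy to check (the function name is just for illustration):

```python
def sem_df(p_observed, n_free_params):
    """Degrees of freedom for a covariance-structure model: unique elements
    of the p x p covariance matrix, p(p+1)/2, minus free parameters."""
    moments = p_observed * (p_observed + 1) // 2
    return moments - n_free_params

# 2 IVs + 1 DV = 3 observed variables, 6 free parameters as counted above:
# sem_df(3, 6) is 0, i.e. the regression is just-identified (saturated).
```

A df of 0 is exactly why no fit assessment is possible: the model can reproduce the observed covariance matrix perfectly regardless of whether it is right.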
Regarding Part 2:
I agree with Patrick that these are nested models and you can "test" the constraints with a $\chi^2$ difference test.
Best Answer
This is one of a class of more general questions in SEM about how fit indices are calculated - it's not just relevant to Mplus.
The incremental fit indices (CFI, etc) all work by comparing the fitted model chi-square with the null model chi-square. They are not hard to work out. For example:
$$ CFI = 1 - \frac{\max(\chi^2_m - df_m,\ 0)}{\max(\chi^2_0 - df_0,\ \chi^2_m - df_m,\ 0)} $$
TLI/NNFI includes some degree of freedom calculations.
The issue is that there is not consensus on how the null model should be estimated. AMOS and EQS, for example, fix correlations between exogenous measured variables for the null model to zero. Mplus does not. LISREL uses a different chi-square to calculate the null model than Mplus does.
If you fit a parallel factor model with equality constraints on loadings and errors, your null model $\chi^2$ can be better than your fitted model $\chi^2$ - but the null model is supposed to be the worst model that there is.
In short, if you have any doubt, don't trust your incremental fit indices - work them out. There's a paper on this: Widaman, K. F., & Thompson, J. S. (2003). On specifying the null model for incremental fit indices in structural equation modelling. Psychological Methods, 8(1), 16-37.
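"Working them out" is a few lines once you have the two chi-squares and their dfs from your output. A sketch of the standard CFI computation (the one that adjusts each $\chi^2$ by its df via the noncentrality):

```python
def cfi(chi2_m, df_m, chi2_0, df_0):
    """Comparative fit index from the fitted (m) and null (0) model
    chi-squares and degrees of freedom."""
    d_m = max(chi2_m - df_m, 0.0)                 # fitted-model noncentrality
    d_0 = max(chi2_0 - df_0, chi2_m - df_m, 0.0)  # null-model noncentrality
    return 1.0 - d_m / d_0 if d_0 > 0 else 1.0
```

Feeding in the null-model $\chi^2$ your program actually reports (versus one you compute yourself under a different null specification) is exactly how the cross-program discrepancies above show up.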
RMSEA is based only on chi-square. If you're worried, work it out.
$$ RMSEA = \sqrt{\frac{\chi^2_m - df_m}{df_m(N-1)}} $$
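The same goes for RMSEA; a direct transcription of the formula (with the noncentrality floored at zero, so a better-than-expected $\chi^2$ gives 0 rather than a math error):

```python
def rmsea(chi2_m, df_m, N):
    """RMSEA from the fitted model chi-square, its df, and sample size N."""
    return (max(chi2_m - df_m, 0.0) / (df_m * (N - 1))) ** 0.5
```

If the value you compute from the reported $\chi^2$ does not match the reported RMSEA, the program is using a different $\chi^2$ (e.g., a robust one) than the one you fed in.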
However, if RMSEA changes when you shift from regular ML to a robust estimator, you can be pretty sure the program is using the robust $\chi^2$ to calculate it.
One interesting place where Mplus does NOT switch is the calculation of the relative fit indices, such as AIC. Mplus does not use the robust $\chi^2$ in the calculation of AIC; it uses the ML version, because AIC is calculated from the log likelihood rather than the $\chi^2$. I'm not sure about other programs, but that's not the behavior I'd have expected.