Your original model:
$Y_{si} = \beta_0 + S_{0s} + (\beta_{1} + S_{1s})X_{1si} + \beta_{2}X_{2si} + \beta_{3}X_{3si} + \beta_{4}X_{1si}X_{2si} + \beta_{5}X_{2si}X_{3si} + \epsilon_{si}$, where $s = 1,\dots,S$ indexes the subject, $i = 1,\dots,I_s$ indexes the measurement, $X_{1si}$ is day of year, $X_{2si}$ is the factor, $X_{3si}$ is temperature, and $\epsilon_{si} \sim N(0, \sigma^2)$,
and $(S_{0s}, S_{1s})' \sim N\left((0,0)',\ \begin{pmatrix}\sigma_1^2 & \sigma_{12}\\ \sigma_{12} & \sigma_2^2\end{pmatrix}\right)$. $\beta_0,\dots,\beta_5$ are fixed effects.
For $X_{1si}$, is it 1 on Jan 1 and 365 (or 366) on Dec 31? If so, you may need a periodic function, or you may need to drop the term: the model implies that the means of $Y_{si}$ on Dec 31, 2015 and Jan 1, 2016 differ by $364\beta_1$, even though the two days are adjacent, and that may not be plausible.
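One common way to make day of year periodic (my suggestion, not something from your model) is to replace the linear day term with sine/cosine terms; a minimal sketch, with the period 365.25 chosen by assumption:

```python
import numpy as np

def seasonal_features(day_of_year, period=365.25):
    """Encode day of year as (sin, cos) so Dec 31 and Jan 1 end up close."""
    angle = 2 * np.pi * np.asarray(day_of_year, dtype=float) / period
    return np.sin(angle), np.cos(angle)

# On the line, day 1 and day 365 are 364 units apart; on the circle they
# are nearly identical, which removes the artificial year-boundary jump.
s1, c1 = seasonal_features(1)
s2, c2 = seasonal_features(365)
gap = np.hypot(s1 - s2, c1 - c2)  # small Euclidean distance on the circle
```

You would then put both the sine and cosine columns into the fixed-effects part of the model in place of $X_{1si}$.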
I think your random slope should be on $X_{3si}$ rather than on $X_{1si}$.
Maybe you can fit a model like this:
$Y_{si} = \beta_0 + S_{0s} + \beta_{1}X_{1si} + \beta_{2}X_{2si} + (\beta_{3}+S_{3s})X_{3si} + \beta_{4}X_{1si}X_{2si} + \beta_{5}X_{2si}X_{3si} + \epsilon_{si}$
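To make the structure of this model concrete, here is a minimal simulation sketch of the random-intercept plus random-temperature-slope model; all parameter values, sample sizes, and covariate distributions are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_obs = 20, 10   # S subjects, I_s measurements each (assumed balanced)
beta = dict(b0=1.0, b1=0.01, b2=0.5, b3=0.3, b4=0.0, b5=0.1)  # made-up fixed effects

# Random effects per subject: intercept S_0s and temperature slope S_3s,
# drawn jointly from N(0, G) with an arbitrary covariance G.
G = np.array([[1.0, 0.3],
              [0.3, 0.2]])
re = rng.multivariate_normal([0.0, 0.0], G, size=n_subj)

rows = []
for s in range(n_subj):
    x1 = rng.integers(1, 366, size=n_obs)   # day of year
    x2 = rng.integers(0, 2, size=n_obs)     # two-level factor coded 0/1
    x3 = rng.normal(15.0, 8.0, size=n_obs)  # temperature
    eps = rng.normal(0.0, 1.0, size=n_obs)  # residual, sigma = 1
    y = (beta["b0"] + re[s, 0]
         + beta["b1"] * x1
         + beta["b2"] * x2
         + (beta["b3"] + re[s, 1]) * x3     # random slope sits on temperature
         + beta["b4"] * x1 * x2
         + beta["b5"] * x2 * x3
         + eps)
    rows.append(y)

Y = np.vstack(rows)  # shape (n_subj, n_obs)
```

Simulating from a candidate model like this, then fitting it back, is a cheap way to check that the structure is identifiable with your sample sizes before touching the real data.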
Obviously, this is an exploratory analysis: you need to find the model that fits the data. My experience is to fit several fixed-effects models (ordinary linear models) first, with temperature alone and then with the other covariates and even the interactions. If no model behaves as you expect, maybe your theory is incorrect. If you find what you want, then add the random effects, so that the final model is more defensible.
In matrix form, the mixed model is
$Y = X\beta + Z\gamma + \epsilon$, where $\gamma \sim N(0, G)$ and $\epsilon \sim N(0, R)$.
For a given $X$, the variance-covariance of $Y$ is
$Var(Y) = ZGZ'+R$
Generally, we are not interested in the random effects themselves; we want to estimate the fixed effects $\beta$. The purpose of including random effects is to make the model better match the real situation when correlation exists among the responses. If $Z$ has many columns with a complicated structure, it is difficult to figure out what $ZGZ'$ looks like, which means you do not really know what model you are fitting. In theory you can put many continuous variables in $Z$, but in practice the model is hard to interpret with two or more continuous variables in $Z$.
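For instance, in the simplest case of a random intercept only, $Z$ is a column of ones for each subject, and $ZGZ' + R$ reduces to the familiar compound-symmetry structure. A small numeric check, with arbitrary variance values:

```python
import numpy as np

n_i = 4              # measurements on one subject
sigma1_sq = 2.0      # random-intercept variance (arbitrary)
sigma_sq = 1.0       # residual variance (arbitrary)

Z = np.ones((n_i, 1))          # random-intercept design for one subject
G = np.array([[sigma1_sq]])
R = sigma_sq * np.eye(n_i)

V = Z @ G @ Z.T + R            # Var(Y) for this subject

# Every off-diagonal entry is sigma1_sq and every diagonal entry is
# sigma1_sq + sigma_sq: compound symmetry, i.e. equal correlation
# between any two measurements on the same subject.
```

With two or more continuous columns in $Z$, the entries of $V$ depend on the covariate values themselves, which is exactly why the implied structure becomes hard to see.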
Another method is to get rid of the random effects and specify the variance-covariance matrix directly through $R$. When the variance-covariance structure is clear, this method is better than using random effects.
In your case, if you think temperature affects the correlation (for example, two measurements from the same subject are more highly correlated when their temperatures are close), you can specify $R$ through the temperature difference, such as $\rho^{|t_i - t_j|}$.
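A minimal sketch of building such an $R$ from temperature distances; `rho` and `sigma_sq` here are illustrative placeholder values, not estimates:

```python
import numpy as np

def temp_distance_cov(temps, rho=0.8, sigma_sq=1.0):
    """Within-subject covariance R with entries sigma^2 * rho**|t_i - t_j|.

    `rho` and `sigma_sq` are arbitrary illustrative values; in practice
    the fitting software estimates them.
    """
    t = np.asarray(temps, dtype=float)
    dist = np.abs(t[:, None] - t[None, :])  # pairwise |t_i - t_j|
    return sigma_sq * rho ** dist

# Three measurements on one subject at 14, 15, and 25 degrees:
R = temp_distance_cov([14.0, 15.0, 25.0])
# The pair at 14 and 15 degrees is more correlated than the pair at 14 and 25.
```

This is the same idea as a continuous-distance (exponential-type) correlation structure, with temperature playing the role usually played by time.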
Best Answer
1) Time. There's a main effect of time: one or more pairs among the levels are significantly different from one another (although it is possible to have a significant main effect with no pair significantly different).
Time * group is the test of the interaction. One simple way to describe it is that the difference between the experimental and control groups is not the same at every level of the time variable; for example, the difference between experimental and control at the pre-test differs from that at the post-test. That's just one possible pattern, though; there may be others.
The between-subjects test result means that when you compare the mean for the experimental group to the mean for the controls, they are not different.
Typically you should interpret the interaction rather than the main effects, especially when the interaction is disordinal, that is, when one group is higher at some level(s) of the other variable and equal or lower at the other(s).
2) If I recall correctly, you can usually ignore the Intercept term, which tests whether the grand mean equals zero.
3) I don't know what software you're using, so I can't say; in SPSS you could set up contrasts.