Solved – Multiple Regression in SPSS: Insignificant coefficients, significant F-statistic, no multicollinearity

Tags: multiple regression, spss

To give you some background, my thesis is looking at the relationship between 3 basic psychological needs (competence, autonomy, and relatedness) and 3 emotion regulation (ER) strategies (concealing, adjusting, and tolerating). My supervisors and I concluded that 3 multiple regression tests would be appropriate: one for each ER strategy as the outcome (dependent) variable, with the three psychological needs as the predictors.

I have run the multiple regression for each outcome; however, I am confused by the SPSS output.

For my first test, Concealing, the F-statistic is significant (SPSS reports p = 0.000, i.e., p < 0.001); however, the t-tests for 2 of the 3 predictors are clearly non-significant (p = 0.384 and p = 0.618).

How can this be possible? The F-test says the model as a whole predicts the outcome, but the t-tests say the individual predictors do not. I thought this could be due to multicollinearity; however, the VIFs are all below 10.

[Screenshot: Concealing regression output]

When I run the test for the second ER strategy, adjusting, none of the predictors are significant, yet the F-statistic still is, and the VIFs are again below 10. For the third ER strategy, tolerating, both the F-statistic and the predictors are non-significant.

[Screenshot: Adjusting regression output]

[Screenshot: Tolerating regression output]

Attached are screenshots of the outputs I am describing. Does this simply mean my data doesn't explain anything, or have I misinterpreted something?

I hope this makes sense and would really appreciate any guidance! Stats is not really my thing 🙂

Best Answer

Two things to keep in mind with multiple regression (MR) analyses. First, the VIF cut-off of 10 only flags extreme variance inflation. Since $\mathrm{VIF}_j = 1/(1 - R_j^2)$, where $R_j^2$ comes from regressing predictor $j$ on the remaining predictors, your VIF values of ≈2.5 correspond to $R_j^2 \approx 0.6$: about 60% of each predictor's variance is shared with the other predictors. So there is a meaningful amount of multicollinearity at play here, even though every VIF is below 10. Second, the p-value SPSS reports for each variable in an MR model tests whether adding that one variable to the model that already contains all of the others produces a statistically significant change in the variance explained in the outcome. In other words:
• calculate $R^2$ for the entire model;
• drop the variable on the current row;
• recalculate the MR and get the smaller $R^2$;
• assess the statistical significance of $H_0: \Delta R^2=0$; and
• repeat for each of the variables (a sketch of this equivalence follows below).
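
To make this concrete, here is a minimal sketch outside SPSS, in Python with statsmodels, on simulated data; the variable names (competence, autonomy, relatedness, concealing) are just illustrative stand-ins, not your dataset. It shows the VIF ↔ $R_j^2$ relationship and that the per-coefficient t-test is exactly the drop-one-variable $\Delta R^2$ F-test (so $t^2 = F$):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 200

# Correlated "needs" predictors: a shared latent factor induces multicollinearity
latent = rng.normal(size=n)
df = pd.DataFrame({
    "competence":  latent + rng.normal(size=n),
    "autonomy":    latent + rng.normal(size=n),
    "relatedness": latent + rng.normal(size=n),
})
df["concealing"] = 0.5 * df["competence"] + 0.1 * df["autonomy"] + rng.normal(size=n)

# VIF_j = 1 / (1 - R_j^2); e.g. VIF = 2.5 corresponds to R_j^2 = 0.6
X = sm.add_constant(df[["competence", "autonomy", "relatedness"]])
for i, name in enumerate(X.columns[1:], start=1):
    print(f"VIF({name}) = {variance_inflation_factor(X.values, i):.2f}")

# Full model vs. the same model with 'autonomy' dropped
full    = smf.ols("concealing ~ competence + autonomy + relatedness", data=df).fit()
reduced = smf.ols("concealing ~ competence + relatedness", data=df).fit()

# Partial F for H0: Delta R^2 = 0 (one variable dropped => 1 numerator df)
delta_r2  = full.rsquared - reduced.rsquared
partial_f = delta_r2 / ((1 - full.rsquared) / full.df_resid)

print(f"t for autonomy        = {full.tvalues['autonomy']:.4f}")
print(f"t^2                   = {full.tvalues['autonomy']**2:.4f}")
print(f"partial F (Delta R^2) = {partial_f:.4f}")  # equals t^2
```

The point is that each t-test is a marginal test: it asks what the variable adds with every other (correlated) predictor already in the model. When the predictors share a lot of variance, the overall F can easily be significant while the individual t-tests are not, which is exactly the pattern you are seeing.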

What may be more useful to you is a Type-I (sequential) $\Delta R^2$ calculation for the model; this requires entering the variables into the model in a fixed, theoretically motivated order and testing the $R^2$ gain at each step.
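
Continuing the same illustrative sketch (same simulated `df` and imports as above; again, not your data or the SPSS procedure itself), the sequential approach might look like this. In SPSS, if I remember the dialog correctly, the analogue is entering the predictors in successive Blocks and ticking "R squared change" under Statistics:

```python
# Sequential (Type I) Delta R^2: enter predictors in a fixed, theory-driven order
steps = ["competence", "autonomy", "relatedness"]  # the chosen entry order
prev = smf.ols("concealing ~ 1", data=df).fit()    # intercept-only baseline
for i, var in enumerate(steps, start=1):
    model = smf.ols("concealing ~ " + " + ".join(steps[:i]), data=df).fit()
    comparison = sm.stats.anova_lm(prev, model)    # F-test for the R^2 gain
    print(f"step {i}: + {var:<11s} Delta R^2 = {model.rsquared - prev.rsquared:.4f}"
          f"  p = {comparison['Pr(>F)'].iloc[1]:.4f}")
    prev = model

# Equivalently, Type I (sequential) sums of squares on the full model:
print(sm.stats.anova_lm(prev, typ=1))
```

One caveat with Type I $\Delta R^2$: the answer depends on the entry order, so the order needs to be justified by theory in advance rather than by which predictors happen to come out significant.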

Happy to share more if it will help.
