Solved – Possible reasons for a change in p-value significance after adjusting for another covariate in a multiple regression

confounding, multicollinearity, p-value, r, regression

Take any multiple regression model as an example: the p-value for a predictor can change from significant to non-significant, or the opposite, after adjusting for one or several covariates in the model. My statistics teacher said the reason is potential collinearity, which is discussed quite often. But there could be other factors, such as a negative confounder. Above all, I would like a good summary of the causes of changes in significance tests for individual predictors in multiple regression models, preferably illustrated with examples performed in R, for instance by simulating some data for each reason. Thanks in advance.
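To make the confounder case concrete, here is a minimal simulation sketch. I have written it in plain Python (standard library only) rather than R, but the same recipe carries over directly to R's `lm()`; the variable names `x`, `z`, `y` are made up for the illustration. The outcome `y` truly depends only on the confounder `z`, while `x` is merely correlated with `z`, so `x` looks significant on its own and loses significance once `z` is adjusted for.

```python
import math
import random

random.seed(1)
n = 200
# z is the confounder; x is correlated with z; y depends only on z
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 1) for zi in z]
y = [zi + random.gauss(0, 1) for zi in z]

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting for a small linear system
    m = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(m):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * v for a, v in zip(M[r], M[c])]
    return [M[i][m] / M[i][i] for i in range(m)]

def ols_t(y, cols):
    # OLS via the normal equations; returns the t-statistic of each coefficient
    n, k = len(y), len(cols) + 1
    X = [[1.0] + [c[i] for c in cols] for i in range(n)]
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)] for a in range(k)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    beta = solve(XtX, Xty)
    resid = [y[i] - sum(X[i][j] * beta[j] for j in range(k)) for i in range(n)]
    s2 = sum(r * r for r in resid) / (n - k)  # residual variance estimate
    ts = []
    for j in range(k):
        # diagonal of (X'X)^-1 obtained by solving against a unit vector
        invcol = solve(XtX, [1.0 if a == j else 0.0 for a in range(k)])
        ts.append(beta[j] / math.sqrt(s2 * invcol[j]))
    return ts

t_simple = ols_t(y, [x])[1]       # y ~ x : x picks up z's effect
t_adjusted = ols_t(y, [x, z])[1]  # y ~ x + z : x's apparent effect vanishes
print(f"t for x alone: {t_simple:.2f}; t for x adjusted for z: {t_adjusted:.2f}")
```

With this setup the t-statistic for `x` is large in the simple model and shrinks toward noise once `z` enters, which is exactly the significant-to-non-significant flip described above.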

Best Answer

The significance of the model as a whole is not affected by collinearity, except in the trivial sense that adding redundant variables increases the numerator degrees of freedom (and decreases the denominator degrees of freedom) without appreciably increasing the numerator sum of squares or decreasing the denominator sum of squares. For individual predictors, collinearity means confounded variance, and when variance is confounded it is difficult to reach strong conclusions about individual effects. Mathematically, this is reflected in the standard errors of the regression weights, which will be high under collinearity.
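The inflation of the standard errors can be quantified: with two predictors whose pairwise correlation is r, the sampling variance of each slope is multiplied by the variance inflation factor 1/(1 - r²), so the standard error is multiplied by its square root. A tiny sketch (plain Python, though the arithmetic is identical in R):

```python
import math

def vif(r):
    # variance inflation factor for a predictor whose correlation
    # with the other predictor is r (two-predictor case)
    return 1.0 / (1.0 - r ** 2)

for r in (0.0, 0.5, 0.9, 0.99):
    # sqrt(VIF) is the multiplier applied to the slope's standard error
    print(f"r = {r:4}: VIF = {vif(r):7.2f}, SE multiplier = {math.sqrt(vif(r)):5.2f}")
```

At r = 0.99 the standard error is roughly seven times what it would be with orthogonal predictors, which is why a t-test that was comfortably significant in a simple regression can become non-significant after a highly correlated covariate is added.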
