Solved – Why can centering independent variables change the main effects in a moderation analysis?

centering, interaction, regression

I have a question related to multiple regression and interaction, inspired by this CV thread: Interaction term using centered variables hierarchical regression analysis? What variables should we center?

When checking for a moderation effect, I center my independent variables and multiply the centered variables to compute the interaction term. Then I run my regression analysis and check the main and interaction effects, which may indicate moderation.

If I redo the analysis without centering, apparently the coefficient of determination ($R^2$) does not change but the regression coefficients ($\beta$s) do. That seems clear and logical.

What I do not understand: the p-values of the main effects change substantially with centering, although the p-value of the interaction does not (which is as it should be). So my interpretation of the main effects could change dramatically, determined simply by whether or not I center. (It is still the same data in both analyses!)
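Here is a minimal simulation that reproduces the phenomenon (a sketch, assuming Python with numpy and statsmodels; the data and coefficient values are hypothetical, chosen so the predictor means are far from zero):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    x1 = rng.normal(5, 1, n)   # IV with mean far from zero
    x2 = rng.normal(3, 1, n)   # moderator with mean far from zero
    y = 1 + 0.5*x1 + 0.5*x2 + 0.4*x1*x2 + rng.normal(0, 1, n)

    def fit(a, b):
        # the interaction term is the product of whatever enters the model
        X = sm.add_constant(np.column_stack([a, b, a*b]))
        return sm.OLS(y, X).fit()

    raw = fit(x1, x2)                            # uncentered
    cen = fit(x1 - x1.mean(), x2 - x2.mean())    # centered first, then multiplied

    print(raw.rsquared, cen.rsquared)          # identical fit
    print(raw.pvalues[1:3], cen.pvalues[1:3])  # main-effect p-values differ
    print(raw.pvalues[3], cen.pvalues[3])      # interaction p-value is the same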

Can somebody clarify this? It would seem to mean that centering is effectively mandatory, and that everybody should do it in order to get the same results from the same data.


Thanks a lot for contributing to this problem and for your comprehensive explanations. Be assured that your help is very much appreciated!

For me, the biggest advantage of centering is that it avoids multicollinearity. It is still quite confusing to establish a rule for whether to center or not. My impression is that most resources suggest centering, although there are some "risks" in doing it.
Again I want to point out that two researchers working with the same material and data might reach different conclusions, because one centers and the other does not. I just read part of a book by Bortz (a professor and something of a statistics star in Germany and Europe), and he does not even mention the technique; he just advises caution in interpreting the main effects of variables that are involved in interactions.

After all, when you conduct a regression with one IV, one moderator (or second IV), and a DV, would you recommend centering or not?

Best Answer

In models with no interaction terms (that is, with no terms that are constructed as the product of other terms), each variable's regression coefficient is the slope of the regression surface in the direction of that variable. It is constant, regardless of the values of the variables, and therefore can be said to measure the overall effect of that variable.

In models with interactions, this interpretation can be made without further qualification only for those variables that are not involved in any interactions. For a variable that is involved in interactions, the "main-effect" regression coefficient -- that is, the regression coefficient of the variable by itself -- is the slope of the regression surface in the direction of that variable when all other variables that interact with that variable have values of zero, and the significance test of the coefficient refers to the slope of the regression surface only in that region of the predictor space. Since there is no requirement that there actually be data in that region of the space, the main-effect coefficient may bear little resemblance to the slope of the regression surface in the region of the predictor space where data were actually observed.
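To see this in the simplest case (a two-predictor illustration, not part of the example model below): in

    y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + e

the slope of the surface in the direction of x1 is b1 + b12*x2, which reduces to b1 only where x2 = 0. At any other value of x2 the slope is different, so b1 alone need not describe the surface in the region where the data actually lie.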

In anova terms, the main-effect coefficient is analogous to a simple main effect, not an overall main effect. Moreover, it may refer to what in an anova design would be empty cells in which the data were supplied by extrapolating from cells with data.

For a measure of the overall effect of the variable that is analogous to an overall main effect in anova and does not extrapolate beyond the region in which data were observed, we must look at the average slope of the regression surface in the direction of the variable, where the averaging is over the N cases that were actually observed. This average slope can be expressed as a weighted sum of the regression coefficients of all the terms in the model that involve the variable in question.

The weights are awkward to describe but easy to get. A variable's main-effect coefficient always gets a weight of 1. For each other coefficient of a term involving that variable, the weight is the mean of the product of the other variables in that term. For example, if we have five "raw" variables x1, x2, x3, x4, x5, plus four two-way interactions (x1,x2), (x1,x3), (x2,x3), (x4,x5), and one three-way interaction (x1,x2,x3), then the model is

y = b0 + b1*x1 + b2*x2 + b3*x3 + b4*x4 + b5*x5 +
    b12*x1*x2 + b13*x1*x3 + b23*x2*x3 + b45*x4*x5 +
    b123*x1*x2*x3 + e

and the overall main effects are

B1 = b1 + b12*M[x2] + b13*M[x3] + b123*M[x2*x3],

B2 = b2 + b12*M[x1] + b23*M[x3] + b123*M[x1*x3],

B3 = b3 + b13*M[x1] + b23*M[x2] + b123*M[x1*x2],

B4 = b4 + b45*M[x5],

B5 = b5 + b45*M[x4],

where M[.] denotes the sample mean of the quantity inside the brackets. All the product terms inside the brackets are among those that were constructed in order to do the regression, so a regression program should already know about them and should be able to print their means on request.
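To make the recipe concrete, here is a minimal sketch (assuming Python with numpy and statsmodels; the simulated data are hypothetical) that fits the example model and assembles the overall effects from the b's and the requested means:

    import numpy as np
    import statsmodels.api as sm

    # Hypothetical data, just to make the sketch runnable.
    rng = np.random.default_rng(1)
    n = 500
    x1, x2, x3, x4, x5 = rng.normal(loc=2.0, scale=1.0, size=(5, n))
    y = (1 + x1 + x2 + x3 + x4 + x5
         + 0.3*x1*x2 + 0.3*x1*x3 + 0.3*x2*x3 + 0.3*x4*x5
         + 0.2*x1*x2*x3 + rng.normal(size=n))

    # Fit the model from the answer; the column order fixes the indices.
    X = sm.add_constant(np.column_stack(
        [x1, x2, x3, x4, x5, x1*x2, x1*x3, x2*x3, x4*x5, x1*x2*x3]))
    b = sm.OLS(y, X).fit().params
    # b[0]=b0, b[1..5]=b1..b5, b[6]=b12, b[7]=b13, b[8]=b23, b[9]=b45, b[10]=b123

    # Overall effects: weight each coefficient of a term involving the
    # variable by the sample mean of the other variables in that term.
    B1 = b[1] + b[6]*x2.mean() + b[7]*x3.mean() + b[10]*(x2*x3).mean()
    B2 = b[2] + b[6]*x1.mean() + b[8]*x3.mean() + b[10]*(x1*x3).mean()
    B3 = b[3] + b[7]*x1.mean() + b[8]*x2.mean() + b[10]*(x1*x2).mean()
    B4 = b[4] + b[9]*x5.mean()
    B5 = b[5] + b[9]*x4.mean()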

In models that have only main effects and two-way interactions, there is a simpler way to get the overall effects: center[1] the raw variables at their means. This is to be done prior to computing the product terms, and is not to be done to the products. Then all the M[.] expressions will become 0, and the regression coefficients will be interpretable as overall effects. The values of the b's will change; the values of the B's will not. Only the variables that are involved in interactions need to be centered, but there is usually no harm in centering other measured variables. The general effect of centering a variable is that, in addition to changing the intercept, it changes only the coefficients of other variables that interact with the centered variable. In particular, it does not change the coefficients of any terms that involve the centered variable. In the example given above, centering x1 would change b0, b2, b3, and b23.
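Continuing the sketch above (same hypothetical data), centering x1 before forming the products should change only b0, b2, b3, and b23, exactly as described:

    # Center x1 only, rebuild the products from the centered variable, refit.
    c1 = x1 - x1.mean()
    Xc = sm.add_constant(np.column_stack(
        [c1, x2, x3, x4, x5, c1*x2, c1*x3, x2*x3, x4*x5, c1*x2*x3]))
    bc = sm.OLS(y, Xc).fit().params

    changed = np.where(~np.isclose(b, bc))[0]
    print(changed)  # expected: indices 0 (b0), 2 (b2), 3 (b3), 8 (b23)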

[1 -- "Centering" is used by different people in ways that differ just enough to cause confusion. As used here, "centering a variable at #" means subtracting # from all the scores on the variable, converting the original scores to deviations from #.]

So why not always center at the means, routinely? Three reasons. First, the main-effect coefficients of the uncentered variables may themselves be of interest. Centering in such cases would be counter-productive, since it changes the main-effect coefficients of other variables.

Second, centering will make all the M[.] expressions 0, and thus convert simple effects to overall effects, only in models with no three-way or higher interactions. If the model contains such interactions then the b -> B computations must still be done, even if all the variables are centered at their means.

Third, centering at a value such as the mean, that is defined by the distribution of the predictors as opposed to being chosen rationally, means that all coefficients that are affected by centering will be specific to your particular sample. If you center at the mean then someone attempting to replicate your study must center at your mean, not their own mean, if they want to get the same coefficients that you got. The solution to this problem is to center each variable at a rationally chosen central value of that variable that depends on the meaning of the scores and does not depend on the distribution of the scores. However, the b -> B computations still remain necessary.

The significance of the overall effects may be tested by the usual procedures for testing linear combinations of regression coefficients. However, the results must be interpreted with care because the overall effects are not structural parameters but are design-dependent. The structural parameters -- the regression coefficients (uncentered, or with rational centering) and the error variance -- may be expected to remain invariant under changes in the distribution of the predictors, but the overall effects will generally change. The overall effects are specific to the particular sample and should not be expected to carry over to other samples with different distributions on the predictors. If an overall effect is significant in one study and not in another, it may reflect nothing more than a difference in the distribution of the predictors. In particular, it should not be taken as evidence that the relation of the dependent variable to the predictors is different in the two studies.
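As a sketch of such a test (continuing the hypothetical example above), the overall effect B1 is a linear combination of the coefficients, which statsmodels can test directly via t_test:

    # Test H0: B1 = 0, i.e. b1 + M[x2]*b12 + M[x3]*b13 + M[x2*x3]*b123 = 0.
    R = np.zeros(11)
    R[1], R[6], R[7], R[10] = 1.0, x2.mean(), x3.mean(), (x2*x3).mean()
    res = sm.OLS(y, X).fit()
    print(res.t_test(R))  # the estimate equals B1, with its SE and p-value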
