When to transform predictor variables in multiple regression

Tags: data transformation, multiple regression

I'm currently taking my first applied linear regression class at the graduate level, and am struggling with predictor variable transformations in multiple linear regression. The text I'm using, Kutner et al.'s Applied Linear Statistical Models, doesn't seem to cover the question I have, apart from suggesting that there is a Box-Cox method for transforming multiple predictors.

When faced with a response variable and several predictor variables, what conditions does one strive to meet with each predictor variable? I understand we're ultimately looking for constancy of error variance and normally distributed errors (at least in the techniques I've been taught so far). I've had many exercises come back where the solution was, for example, y ~ x1 + (1/x2) + log(x3), with one or more predictors transformed.
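For concreteness, such a model can be specified directly in an lm() formula; a minimal sketch, assuming a data frame df with columns y, x1, x2, and x3 (note that 1/x2 must be wrapped in I() so the slash is treated as arithmetic rather than as formula syntax):

fit <- lm(y ~ x1 + I(1/x2) + log(x3), data = df)
summary(fit)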

I understood the rationale in simple linear regression, since it was easy to look at y ~ x1 and the related diagnostics (Q-Q plots of residuals, residuals vs. y, residuals vs. x, etc.) and test whether y ~ log(x1) fit our assumptions better.
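Here is the kind of side-by-side check I mean; a sketch, assuming a data frame df with a positive predictor x1:

fit_raw <- lm(y ~ x1, data = df)
fit_log <- lm(y ~ log(x1), data = df)
par(mfrow = c(2, 2))
qqnorm(resid(fit_raw)); qqline(resid(fit_raw))   # normality of residuals, raw predictor
qqnorm(resid(fit_log)); qqline(resid(fit_log))   # normality of residuals, logged predictor
plot(df$x1, resid(fit_raw)); abline(h = 0)       # residuals vs. x, raw
plot(log(df$x1), resid(fit_log)); abline(h = 0)  # residuals vs. x, logged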

Is there a good place to start understanding when to transform a predictor in the presence of many predictors?

Thank you in advance.
Matt

Best Answer

I take your question to be: how do you detect when the conditions that make transformations appropriate exist, rather than what those conditions logically are. It's always nice to bookend data analyses with exploration, especially graphical data exploration. (Various formal tests can be conducted, but I'll focus on graphical EDA here.)

Kernel density plots are better than histograms for an initial overview of each variable's univariate distribution. With multiple variables, a scatterplot matrix is handy, and adding lowess smooths to the scatterplots at the start is always advisable: they give you a quick and dirty look at whether the relationships are approximately linear. John Fox's car package usefully combines all of these:

library(car)
scatterplotMatrix(data)  # called scatterplot.matrix() in older versions of car

Be sure to have your variables as columns. If you have many variables, the individual plots can be small, so maximize the plot window; the scatterplots should then be big enough to pick out the ones you want to examine individually, which you can then make as single plots. E.g.,

dev.new()                 # opens a new plot window (windows() on Windows)
plot(density(X[, 3]))     # univariate kernel density of the third predictor
rug(X[, 3])
dev.new()
plot(X[, 3], y)           # marginal scatterplot of y against that predictor
lines(lowess(X[, 3], y))  # lowess smooth as a rough linearity check

After fitting a multiple regression model, you should still plot and check your data, just as with simple linear regression. Q-Q plots of the residuals are just as necessary, and you can do a scatterplot matrix of your residuals against your predictors, following a similar procedure as before.

dev.new()
qqPlot(residuals(model))  # called qq.plot() in older versions of car
dev.new()
scatterplotMatrix(cbind(residuals(model), X))

If anything looks suspicious, plot it individually and add abline(h=0) as a visual guide. If you suspect an interaction, you can create an X[,1]*X[,2] variable and examine the residuals against that; likewise, you can make a scatterplot of residuals vs. X[,3]^2, and so on. Any other residual plots you like besides residuals vs. x can be made similarly; see the sketch below. Bear in mind that all of these ignore the other x dimensions that aren't being plotted. If your data are grouped (e.g., from an experiment), you can make partial plots instead of, or in addition to, marginal plots.
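As a concrete sketch of those checks, assuming the model and predictor matrix X from above (the product and squared terms are just hypothetical candidates):

dev.new()
par(mfrow = c(1, 3))
plot(X[, 2], residuals(model)); abline(h = 0)           # one suspicious predictor
plot(X[, 1] * X[, 2], residuals(model)); abline(h = 0)  # candidate interaction
plot(X[, 3]^2, residuals(model)); abline(h = 0)         # candidate curvature
dev.new()
crPlots(model)  # component-plus-residual (partial residual) plots from the car package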

Hope that helps.
