Overview of simple slopes analysis
- The points used to generate a graph for a simple slopes analysis are predicted values of the dependent variable, given the model and various values of the predictor variables.
Are negative predicted values possible?
- Yes. A regression equation can give rise to negative predicted values.
- For example, if the equation was $Y = 1 - 2X -3Z - 1XZ$, then predicted $Y$ at $X=1$ and $Z=1$ is $Y = 1 - 2(1) -3(1) - 1(1)(1)$ or $1 -2 -3 - 1=-5$.
Do negative predicted values imply that you've done something wrong?
Possibly. If some or all of the following apply, then negative predicted values might be a red flag for your analysis:
- the range of the dependent variable is all positive, and particularly if the values are a long way from 0.
- the correlation between predictors is not huge
- you have chosen appropriate values of predictors.
When performing a simple slopes analysis, typical predictor values include:
- continuous predictors that are part of the moderator effect: values one (or sometimes two) standard deviations above and below the mean, or something similar
- categorical predictors that are part of the moderator effect: each of the values that the categorical predictor takes
- continuous covariates: the mean of the covariate
- categorical covariates: one of the categories
Thus, choosing predictor values consistent with the above prescriptions will generally give rise to predictions on the dependent variable that are in the ballpark of the range of the dependent variable.
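As a quick sanity check, you can compute the predicted values at these typical predictor combinations yourself and compare them against the observed range of the dependent variable. A minimal R sketch with simulated data (all variable names are illustrative):

```r
# Sketch: check whether simple-slope predictions fall inside the
# observed range of the DV (simulated data; names are illustrative)
set.seed(1)
x <- rnorm(200, mean = 5, sd = 2)   # focal predictor
z <- rnorm(200, mean = 3, sd = 1)   # moderator
y <- 2 + 0.5 * x + 0.8 * z + 0.3 * x * z + rnorm(200)

fit <- lm(y ~ x * z)

# Typical values: mean +/- 1 SD for each continuous predictor
grid <- expand.grid(x = mean(x) + c(-1, 1) * sd(x),
                    z = mean(z) + c(-1, 1) * sd(z))
grid$pred <- predict(fit, newdata = grid)
grid          # predicted DV at the four combinations
range(y)      # observed range of the DV for comparison
```

If the predictions at sensible predictor values fall well outside the observed range of the DV, that is worth investigating.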
Of course, there are situations that could legitimately give rise to negative values even when the range of the dependent variable includes all positive values. This might be related to non-normal errors, correlated predictors, etc.
I know this is really late, but you can do it using the pequod package.
library(pequod)
model.peq <- lmres(outcome ~ predictor*moderator1*moderator2, centered = c("predictor","moderator1","moderator2"), data=mydata)
S_slopes<-simpleSlope(model.peq, pred="predictor", mod1="moderator1", mod2="moderator2")
S_slopes
In the example, it seems like the error is really small so all the slopes are significant. If you change the data slightly, you get only one significant slope:
set.seed(123)
predictor <- rnorm(1000, 10, 5)
moderator1 <- rnorm(1000, 100, 25)
moderator2 <- rnorm(1000, 50, 20)
outcome <- predictor*moderator1*moderator2*rnorm(20, 0)/10000 # I changed this to zero from 30
mydata <- data.frame(predictor, moderator1, moderator2, outcome)
model.peq <- lmres(outcome ~ predictor*moderator1*moderator2, centered = c("predictor","moderator1","moderator2"), data=mydata)
S_slopes<-simpleSlope(model.peq, pred="predictor", mod1="moderator1", mod2="moderator2")
S_slopes
Best Answer
Terminology and Overview
In the context of multiple regression, a moderator effect is just an interaction term between two predictors, and covariates are simply additional predictors included to control for their influence. Thus, you should be able to run a hierarchical regression with moderators and covariates in just about any statistical software that supports multiple regression.
Typical approach to testing moderator effect after controlling for covariates
In R you can fit a series of models with lm(), each adding additional predictors, and use anova() to compare the models. Once you understand hierarchical regression in your chosen tool, a simple recipe would be as follows. Let's assume that you have the following variables:
- DV: the dependent variable
- IV1 and IV2: the predictors involved in the moderator effect
- the interaction of IV1 and IV2
- CV1 and CV2: covariates
In some cases you may need to create the interaction term yourself (e.g., in SPSS: compute iv1byiv2 = iv1 * iv2.). If you want to interpret the regression coefficients, you may find it useful to center iv1 and iv2 before creating the interaction term. In R this is not necessary, because you can write IV1*IV2 directly in the linear model notation. You can then estimate the models:
m1 <- lm(DV~CV1+CV2)
m2 <- lm(DV~CV1+CV2+IV1+IV2)
m3 <- lm(DV~CV1+CV2+IV1*IV2)
You can then interpret the significance of the R-squared change between model 2 and model 3 as a test of whether there is an interaction effect:
anova(m2, m3)
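Putting the recipe together, here is a minimal self-contained R sketch of the three-block comparison (simulated data; all variable names are illustrative):

```r
# Sketch: hierarchical regression with covariates, main effects,
# and an interaction (simulated data; names are illustrative)
set.seed(42)
n   <- 300
CV1 <- rnorm(n); CV2 <- rnorm(n)
IV1 <- rnorm(n); IV2 <- rnorm(n)
DV  <- 1 + 0.3*CV1 + 0.2*CV2 + 0.5*IV1 + 0.4*IV2 + 0.6*IV1*IV2 + rnorm(n)
d   <- data.frame(DV, CV1, CV2, IV1, IV2)

m1 <- lm(DV ~ CV1 + CV2, data = d)               # block 1: covariates only
m2 <- lm(DV ~ CV1 + CV2 + IV1 + IV2, data = d)   # block 2: add main effects
m3 <- lm(DV ~ CV1 + CV2 + IV1 * IV2, data = d)   # block 3: add interaction

anova(m2, m3)   # F test of the R-squared change due to the interaction
```

Note that IV1 * IV2 in the formula expands to the two main effects plus their product, so m3 is m2 plus the interaction term, as required for the nested comparison.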
Simple slopes analysis
If you want to perform a simple slopes analysis, you can take the regression equation from the final model and calculate predicted values of the dependent variable at appropriate combinations of predictor values. You can do this by hand, or you can use predict() in R. You can then plot these values using whatever plotting tool you like (e.g., R, SPSS, Excel).
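For example, a sketch of generating and plotting such points with predict(), using a model like the m3 above (simulated data; all variable names are illustrative):

```r
# Sketch: generate simple-slopes plotting points with predict()
# (simulated data; names are illustrative)
set.seed(7)
d <- data.frame(IV1 = rnorm(150), IV2 = rnorm(150),
                CV1 = rnorm(150), CV2 = rnorm(150))
d$DV <- 1 + 0.4*d$IV1 + 0.3*d$IV2 + 0.5*d$IV1*d$IV2 + rnorm(150)

m3 <- lm(DV ~ CV1 + CV2 + IV1 * IV2, data = d)

# Predict DV at low/high IV1 crossed with low/high IV2,
# holding the covariates at their means
newdat <- expand.grid(IV1 = mean(d$IV1) + c(-1, 1) * sd(d$IV1),
                      IV2 = mean(d$IV2) + c(-1, 1) * sd(d$IV2),
                      CV1 = mean(d$CV1),
                      CV2 = mean(d$CV2))
newdat$DV_hat <- predict(m3, newdata = newdat)
newdat   # four plotting points: one simple slope per IV2 value

# A basic plot of the two simple slopes:
interaction.plot(newdat$IV1, newdat$IV2, newdat$DV_hat,
                 xlab = "IV1 (mean +/- 1 SD)", trace.label = "IV2")
```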
Personally, I find conditioning plots a better option than simple slopes analysis. R has the coplot() function. The idea is to show the relationship between the IV and DV in a set of arranged scatterplots, each defined by a range of the moderator. When I searched, I found an example of using conditioning plots for moderator regression on page 585 of the Handbook of Research Methods in Personality Psychology.
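A minimal coplot() sketch (simulated data; all variable names are illustrative):

```r
# Sketch: conditioning plot of DV on IV across ranges of the moderator
# (simulated data; names are illustrative)
set.seed(99)
d <- data.frame(IV = rnorm(300), MOD = rnorm(300))
d$DV <- 0.2 * d$IV + 0.6 * d$IV * d$MOD + rnorm(300)

# One scatterplot of DV vs IV per overlapping range of MOD;
# the fitted line in each panel shows how the slope changes
coplot(DV ~ IV | MOD, data = d, number = 4,
       panel = function(x, y, ...) {
         points(x, y, ...)
         abline(lm(y ~ x), col = "blue")
       })
```

With a positive interaction like the one simulated here, the within-panel slopes should steepen as the moderator increases.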