Solved – Test whether simple slopes are different from zero in 3-way interaction in multiple regression

data visualization, interaction, multiple regression, r, regression

I'm trying to test whether 4 different slopes from a 3-way interaction in multiple regression are significantly different from zero. The four lines are plotted at 2 levels of each of the 2 moderators (lo-lo, hi-lo, lo-hi, hi-hi).

Here's how we could test the significance of slopes in a 2-way interaction model. I would like to extend this method to test the significance of slopes in a 3-way interaction model in multiple regression. A 2-way interaction in multiple regression takes the following form:

 y = a + b1(X) + b2(Z) + b3(X)(Z)

With a 2-way interaction, one can examine whether the slopes at various levels of the moderator, Z, are significantly different from zero using the following equations:

 b1 at Z = b1 + b3(Z)

where b1 at Z is the slope of the predicted effect of X on Y at any particular value of Z

 SE(b1 at Z) = (var(b1) + (Z^2)(var(b3)) + (2Z)(cov(b1,b3)))^(1/2)

where var(b1) is the variance of the b1 regression coefficient, var(b3) is the variance of the b3 regression coefficient, and cov(b1,b3) is the covariance between the b1 and b3 regression coefficients

 t = (b1 at Z)/SE(b1 at Z)
 df = N - k - 1

where N=sample size and k=number of predictors
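To make that concrete, here is a minimal sketch of the 2-way calculation in R. The model fit2 and the names y, X, Z, and somedata are just placeholders for illustration, not my actual data:

# hypothetical 2-way example: slope of X at a chosen value of Z
fit2  <- lm(y ~ X*Z, data = somedata)   # placeholder model
b     <- coef(fit2)
V     <- vcov(fit2)                     # variances/covariances of the coefficients
z     <- 1                              # value of Z at which to test the slope of X
slope <- b["X"] + b["X:Z"]*z                                        # b1 at Z
se    <- sqrt(V["X","X"] + z^2*V["X:Z","X:Z"] + 2*z*V["X","X:Z"])   # SE(b1 at Z)
tval  <- slope/se                                                   # t
pval  <- 2*pt(-abs(tval), df = df.residual(fit2))                   # df = N - k - 1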

One can then test the significance of each slope using a t-test. My question is, how can I extend this approach to test the significance of slopes in a 3-way interaction model? See my example R syntax below:

set.seed(123)
predictor <- rnorm(1000, 10, 5)
moderator1 <- rnorm(1000, 100, 25)
moderator2 <- rnorm(1000, 50, 20)
outcome <- predictor*moderator1*moderator2*rnorm(20, 30)/10000
mydata <- data.frame(predictor, moderator1, moderator2, outcome)

model <- lm(outcome ~ predictor + moderator1 + moderator2 + predictor*moderator1 + predictor*moderator2 + moderator1*moderator2 + predictor*moderator1*moderator2, data=mydata)
plotData <- expand.grid(
                    # qnorm(pnorm(c(-1, 1)), mean = m, sd = s) is just m - s and m + s,
                    # i.e. values at roughly -1 SD and +1 SD of each variable
                    predictor = pretty(qnorm(pnorm(c(-1, 1)), mean = mean(mydata$predictor, na.rm = TRUE), sd = sd(mydata$predictor, na.rm = TRUE))),
                    moderator1 = qnorm(pnorm(c(-1, 1)), mean = mean(mydata$moderator1, na.rm = TRUE), sd = sd(mydata$moderator1, na.rm = TRUE)),
                    moderator2 = qnorm(pnorm(c(-1, 1)), mean = mean(mydata$moderator2, na.rm = TRUE), sd = sd(mydata$moderator2, na.rm = TRUE))
                    )

plotData$outcome <- predict(model, newdata = plotData)
plotData$mod1 <- factor(plotData$moderator1, labels = c("Lo mod1", "Hi mod1"))
plotData$mod2 <- factor(plotData$moderator2, labels = c("Lo mod2", "Hi mod2"))

mod1Lo_mod2Lo <- plotData[plotData$mod1=="Lo mod1" & plotData$mod2=="Lo mod2",]
mod1Hi_mod2Lo <- plotData[plotData$mod1=="Hi mod1" & plotData$mod2=="Lo mod2",]
mod1Lo_mod2Hi <- plotData[plotData$mod1=="Lo mod1" & plotData$mod2=="Hi mod2",]
mod1Hi_mod2Hi <- plotData[plotData$mod1=="Hi mod1" & plotData$mod2=="Hi mod2",]

#Generate Plot
plot(mod1Lo_mod2Lo$predictor, mod1Lo_mod2Lo$outcome, lty=1, lwd=2, type='l', xlab="predictor", ylab="outcome", ylim=c(min(plotData$outcome), max(plotData$outcome)))
lines(mod1Hi_mod2Lo$predictor, mod1Hi_mod2Lo$outcome, lty=2, lwd=2)
lines(mod1Lo_mod2Hi$predictor, mod1Lo_mod2Hi$outcome, lty=1, lwd=2, col="gray")
lines(mod1Hi_mod2Hi$predictor, mod1Hi_mod2Hi$outcome, lty=2, lwd=2, col="gray")
legend("topleft", legend=c("lo Mod1, lo Mod2","hi Mod1, lo Mod2","lo Mod1, hi Mod2","hi Mod1, hi Mod2"), lty=c(1,2,1,2), lwd=c(2,2,2,2), col=c("black","black","grey","grey"))

How can I test whether the slope of each of these four lines, taken separately, is different from zero?

Many thanks in advance!

Best Answer

I know this is really late, but you can do it using the pequod package.

library(pequod)
model.peq <- lmres(outcome ~ predictor*moderator1*moderator2,
                   centered = c("predictor", "moderator1", "moderator2"), data = mydata)
S_slopes <- simpleSlope(model.peq, pred = "predictor", mod1 = "moderator1", mod2 = "moderator2")
S_slopes
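If I remember the package right, you can also plot the four simple slopes straight from that object:

PlotSlope(S_slopes)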

In the example, it seems the error is really small, so all the slopes come out significant. If you change the data slightly, you get only one significant slope:

set.seed(123)
predictor <- rnorm(1000, 10, 5)
moderator1 <- rnorm(1000, 100, 25)
moderator2 <- rnorm(1000, 50, 20)
outcome <- predictor*moderator1*moderator2*rnorm(20, 0)/10000 # I changed this to zero from 30
mydata <- data.frame(predictor, moderator1, moderator2, outcome)
model.peq <- lmres(outcome ~ predictor*moderator1*moderator2,
                   centered = c("predictor", "moderator1", "moderator2"), data = mydata)
S_slopes <- simpleSlope(model.peq, pred = "predictor", mod1 = "moderator1", mod2 = "moderator2")
S_slopes
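If you would rather stay with your original lm() fit and extend the 2-way formulas from your question by hand, the simple slope of the predictor at chosen values of the two moderators is the predictor coefficient plus the relevant two-way and three-way interaction coefficients weighted by those moderator values, and its standard error comes from the usual formula for the variance of a linear combination of coefficients, using vcov(). Here is a rough sketch of that calculation; the helper simple_slope_3way is just a name I made up, and it assumes the model object and the ±1 SD moderator values from your post:

# simple slope of predictor at given values of the two moderators,
# computed directly from the lm() fit and its coefficient covariance matrix
simple_slope_3way <- function(fit, z1, z2) {
  b <- coef(fit)
  V <- vcov(fit)
  # weights for the terms that involve 'predictor'
  a <- c("predictor" = 1,
         "predictor:moderator1" = z1,
         "predictor:moderator2" = z2,
         "predictor:moderator1:moderator2" = z1*z2)
  slope <- sum(b[names(a)] * a)
  se    <- sqrt(as.numeric(t(a) %*% V[names(a), names(a)] %*% a))
  tval  <- slope/se
  pval  <- 2*pt(-abs(tval), df = df.residual(fit))
  c(slope = slope, se = se, t = tval, p = pval)
}

m1 <- mean(mydata$moderator1) + c(-1, 1)*sd(mydata$moderator1)   # lo/hi moderator1
m2 <- mean(mydata$moderator2) + c(-1, 1)*sd(mydata$moderator2)   # lo/hi moderator2
rbind(lo_lo = simple_slope_3way(model, m1[1], m2[1]),
      hi_lo = simple_slope_3way(model, m1[2], m2[1]),
      lo_hi = simple_slope_3way(model, m1[1], m2[2]),
      hi_hi = simple_slope_3way(model, m1[2], m2[2]))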