Random Forest – Do Random Forest Model Predictions Have a Prediction Interval?

Tags: confidence-interval, r, random-forest

If I run a randomForest model, I can then make predictions based on the model. Is there a way to get a prediction interval for each of the predictions, so that I know how "sure" the model is of its answer? If this is possible, is it simply based on the variability of the dependent variable for the whole model, or will it have wider and narrower intervals depending on the particular decision tree that was followed for a particular prediction?

Best Answer

This is partly a response to @Sashikanth Dareddy (since it will not fit in a comment) and partly a response to the original post.

Remember what a prediction interval is: it is an interval or set of values where we predict that future observations will lie. Generally a prediction interval has two main pieces that determine its width: a piece representing the uncertainty about the predicted mean (or other parameter), which is the confidence-interval part, and a piece representing the variability of the individual observations around that mean. The confidence interval is fairly robust due to the Central Limit Theorem, and in the case of a random forest the bootstrapping helps as well. But the prediction interval is completely dependent on the assumptions about how the data are distributed given the predictor variables; the CLT and bootstrapping have no effect on that part.
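To make that concrete, here is a minimal sketch (my own toy example, not part of the original question) showing the two pieces for a linear model, where predict() exposes both of them directly:

# Sketch: the two pieces of a prediction interval in a simple lm fit.
# (Toy data invented for illustration only.)
set.seed(42)
d <- data.frame(x = rnorm(50))
d$y <- 2 + 3 * d$x + rnorm(50)
fit <- lm(y ~ x, data = d)

p <- predict(fit, newdata = data.frame(x = 0), se.fit = TRUE)
sigma <- summary(fit)$sigma  # variability of individual points around the mean

# confidence interval: uncertainty about the mean only
p$fit + qt(c(0.025, 0.975), p$df) * p$se.fit
# prediction interval: both pieces combined (matches interval='prediction')
p$fit + qt(c(0.025, 0.975), p$df) * sqrt(p$se.fit^2 + sigma^2)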

The prediction interval should be wider where the corresponding confidence interval would also be wider. Other things that affect the width of the prediction interval are assumptions about equal variance or not; these have to come from the knowledge of the researcher, not from the random forest model.

A prediction interval does not make sense for a categorical outcome (you could do a prediction set rather than an interval, but most of the time it would probably not be very informative).
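For what it's worth, here is a rough sketch of such a prediction set (my own illustration on the built-in iris data, not part of the original question): keep the most-voted classes until their cumulative vote fraction reaches 95%.

library(randomForest)
set.seed(1)
fit.cls <- randomForest(Species ~ ., data = iris)
probs <- predict(fit.cls, iris[c(1, 51, 101), ], type = "prob")

# smallest set of classes whose cumulative vote fraction is at least 95%
apply(probs, 1, function(p) {
  p <- sort(p, decreasing = TRUE)
  names(p)[seq_len(which(cumsum(p) >= 0.95)[1])]
})

On easy data like iris the set is usually a single class, which illustrates why such sets are often not very informative.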

We can see some of the issues around prediction intervals by simulating data where we know the exact truth. Consider the following data:

set.seed(1)

x1 <- rep(0:1, each=500)
x2 <- rep(0:1, each=250, length=1000)

# true mean: 10 + 5*x1 + 10*x2 - 3*x1*x2; individual points have sd = 1
y <- 10 + 5*x1 + 10*x2 - 3*x1*x2 + rnorm(1000)

This particular data follows the assumptions for a linear regression and is fairly straightforward for a random forest to fit. We know from the "true" model that when both predictors are 0 the mean is 10, and we know that the individual points follow a normal distribution with a standard deviation of 1. This means that the 95% prediction interval based on perfect knowledge for these points would run from 8 to 12 (well, actually from 8.04 to 11.96, but rounding keeps it simpler). Any estimated prediction interval should be wider than this (not having perfect information adds width to compensate) and should include this range.
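For concreteness, the 8.04 to 11.96 figure is just the normal quantiles around the true mean:

10 + qnorm(c(0.025, 0.975)) * 1
# [1]  8.040036 11.959964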

Let's look at the intervals from regression:

fit1 <- lm(y ~ x1 * x2)

newdat <- expand.grid(x1=0:1, x2=0:1)

(pred.lm.ci <- predict(fit1, newdat, interval='confidence'))
#        fit       lwr      upr
# 1 10.02217  9.893664 10.15067
# 2 14.90927 14.780765 15.03778
# 3 20.02312 19.894613 20.15162
# 4 21.99885 21.870343 22.12735

(pred.lm.pi <- predict(fit1, newdat, interval='prediction'))
#        fit      lwr      upr
# 1 10.02217  7.98626 12.05808
# 2 14.90927 12.87336 16.94518
# 3 20.02312 17.98721 22.05903
# 4 21.99885 19.96294 24.03476

We can see there is some uncertainty in the estimated means (confidence interval), and that gives us a prediction interval that is wider than (but includes) the 8 to 12 range.

Now let's look at the intervals based on the individual predictions of the individual trees. We should expect these to be wider, since the random forest does not benefit from the assumptions (which we know to be true for this data) that the linear regression does:

library(randomForest)
fit2 <- randomForest(y ~ x1 + x2, ntree=1001)

pred.rf <- predict(fit2, newdat, predict.all=TRUE)

# for each prediction point: mean ± 1 sd across trees, plus the
# 2.5% and 97.5% quantiles of the individual tree predictions
pred.rf.int <- apply(pred.rf$individual, 1, function(x) {
  c(mean(x) + c(-1, 1) * sd(x),
    quantile(x, c(0.025, 0.975)))
})

t(pred.rf.int)
#                           2.5%    97.5%
# 1  9.785533 13.88629  9.920507 15.28662
# 2 13.017484 17.22297 12.330821 18.65796
# 3 16.764298 21.40525 14.749296 21.09071
# 4 19.494116 22.33632 18.245580 22.09904

The intervals are wider than the regression prediction intervals, but they don't cover the entire range. They do include the true values and therefore may be legitimate as confidence intervals, but they are only predicting where the mean (predicted value) is, not the added piece for the distribution around that mean. For the first case, where x1 and x2 are both 0, the intervals don't go below 9.7; this is very different from the true prediction interval, which goes down to 8. If we generate new data points then there will be several points (many more than 5%) that are in the true and regression intervals but don't fall in the random forest intervals.
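That last claim is easy to check (a quick addition of mine, reusing the objects above): draw fresh observations at x1 = 0, x2 = 0 and count how many fall outside each interval. The true 95% interval should miss about 5%.

set.seed(2)
y.new <- 10 + rnorm(10000)  # fresh observations at x1 = 0, x2 = 0

mean(y.new < 8.04 | y.new > 11.96)          # true interval: ~5% missed
mean(y.new < pred.lm.pi[1, "lwr"] |
     y.new > pred.lm.pi[1, "upr"])          # regression PI: ~5% missed
rf.q <- t(pred.rf.int)[1, 3:4]              # RF 2.5%/97.5% interval for point 1
mean(y.new < rf.q[1] | y.new > rf.q[2])     # far more than 5% missed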

To generate a prediction interval you will need to make some strong assumptions about the distribution of the individual points around the predicted means. Then you could take the predictions from the individual trees (the bootstrapped confidence-interval piece) and generate a random value from the assumed distribution with that center. The quantiles of those generated values may form the prediction interval (but I would still test it; you may need to repeat the process several more times and combine).

Here is an example of doing this by adding normal deviations (normal since we know the original data used a normal distribution) to the predictions, with the standard deviation based on the estimated MSE from each tree:

pred.rf.int2 <- sapply(1:4, function(i) {
  # add normal noise to each tree's prediction, sd from the forest's MSE
  tmp <- pred.rf$individual[i, ] + rnorm(1001, 0, sqrt(fit2$mse))
  quantile(tmp, c(0.025, 0.975))
})
t(pred.rf.int2)
#           2.5%    97.5%
# [1,]  7.351609 17.31065
# [2,] 10.386273 20.23700
# [3,] 13.004428 23.55154
# [4,] 16.344504 24.35970

These intervals contain those based on perfect knowledge, so they look reasonable. But they will depend greatly on the assumptions made (the assumptions are valid here because we used the knowledge of how the data was simulated; they may not be as valid in real data cases). I would still repeat the simulations several times, for data that looks more like your real data (but simulated so that you know the truth), before fully trusting this method.
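As a sketch of that repetition (again my own addition, under the same simulation setup): regenerate the data many times, rebuild the interval for x1 = 0, x2 = 0, and track how often fresh observations land inside it. Coverage near 0.95 is reassuring; coverage well below it is not.

set.seed(3)
covered <- replicate(20, {  # a few replications to keep it fast; use more in practice
  x1 <- rep(0:1, each=500)
  x2 <- rep(0:1, each=250, length=1000)
  y <- 10 + 5*x1 + 10*x2 - 3*x1*x2 + rnorm(1000)
  fit <- randomForest(y ~ x1 + x2, ntree=1001)
  pr <- predict(fit, data.frame(x1=0, x2=0), predict.all=TRUE)
  int <- quantile(pr$individual[1, ] + rnorm(1001, 0, sqrt(fit$mse)),
                  c(0.025, 0.975))
  y.new <- 10 + rnorm(100)  # fresh observations at x1 = 0, x2 = 0
  mean(int[1] <= y.new & y.new <= int[2])
})
mean(covered)  # average coverage; should be near (or above) 0.95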
