Reconciling Boosted Regression Trees, Generalized Boosted Models, and Gradient Boosting Machine

Tags: boosting, machine learning

Questions:

  1. What is the difference between boosted regression trees (BRT) and
    generalized boosted models (GBM)? Can the terms be used interchangeably?
    Is one a specific form of the other?
  2. Why did Ridgeway use the phrase "Generalized Boosted Regression
    Models" (GBM) to describe what Friedman had previously proposed as the
    "Gradient Boosting Machine" (GBM)? The two acronyms are identical and
    describe the same thing, but are derived from different phrases.

Background:

I am having trouble determining how the terms BRT and GBM differ. From what I understand, both terms describe classification and regression trees that have stochasticity incorporated through some sort of boosting (e.g. bagging, bootstrapping, cross-validation).
Also, from what I gather, the term GBM was first coined by Friedman (2001) in his paper "Greedy function approximation: a gradient boosting machine". Ridgeway then implemented the procedure described by Friedman in his 2006 package "Generalized Boosted Regression Models" (GBM). In my field (ecology), Elith et al. (2008) were the first to demonstrate Ridgeway's gbm package for species distribution modelling. However, Elith et al. use the term "boosted regression tree" (BRT) to describe Friedman and Ridgeway's GBM theory and implementation.

I am confused as to whether these terms can be used interchangeably. It is somewhat confusing that one author would use the same acronym (derived from a different phrase) to describe the same theory that a previous author proposed. It is also confusing that a third author used a completely different term when describing this theory in ecological terms.

The best I can come up with is that BRTs are a specific form of GBM in which the distribution is binomial, but I am not sure of this.

Elith et al. define boosted regression trees like this… "Boosted regression trees combine the strengths of two algorithms: regression trees (models that relate a response to their predictors by recursive binary splits) and boosting (an adaptive method for combining many simple models to give improved predictive performance). The final BRT model can be understood as an additive regression model in which individual terms are simple trees, fitted in a forward, stagewise fashion" (Elith et al. 2008).

Best Answer

As @aginensky mentioned in the comments thread, it's impossible to get inside the author's head, but BRT is most likely simply a clearer description of gbm's modeling process, which is, forgive me for stating the obvious, boosting applied to classification and regression trees. And since you've asked about boosting, gradients, and regression trees, here are my plain English explanations of the terms. FYI, CV is not a boosting method but rather a method to help identify optimal model parameters through repeated sampling. See here for some excellent explanations of the process.

Boosting is a type of ensemble method. Ensemble methods are a family of techniques in which the final prediction is made by aggregating the predictions of a number of individual models. Boosting, bagging, and stacking are some widely implemented ensemble methods. Stacking involves fitting a number of different models individually (of any structure of your own choosing) and then combining them in a single linear model. This is done by regressing the dependent variable on the individual models' predictions, with each model treated as a basis function; the regression coefficients are normally chosen to minimize a cross-validated (e.g. LOOCV) sum of squared errors (to my mind, this is very, very similar to a GAM). Similarly, bagging involves fitting a number of similarly structured models to bootstrapped samples and averaging their predictions. At the risk of once again stating the obvious, stacking and bagging are parallel ensemble methods.
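To make the bagging idea concrete, here is a minimal sketch in R using rpart trees on the built-in mtcars data; none of this comes from the gbm package, and the object names (n_bags, fits, bagged_pred) are my own invention:

    library(rpart)  # recursive-partitioning (CART) trees

    set.seed(1)
    n_bags <- 50

    # fit one tree per bootstrap sample of the rows
    # (mtcars is just a convenient built-in example dataset)
    fits <- lapply(seq_len(n_bags), function(i) {
      boot_rows <- sample(nrow(mtcars), replace = TRUE)
      rpart(mpg ~ ., data = mtcars[boot_rows, ])
    })

    # the bagged prediction is simply the average of the individual trees' predictions
    preds <- sapply(fits, predict, newdata = mtcars)
    bagged_pred <- rowMeans(preds)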

Boosting, however, is a sequential method. Friedman and Ridgeway both describe the algorithmic process in their papers so I won't insert it here just this second, but the plain English (and somewhat simplified) version is that you fit one model after the other, with each subsequent model fit to the errors left over by the models before it (the shrinkage parameter is the weight given to each new model's contribution to the ensemble, and the smaller you can afford to make it, the better). In an abstract sense, you can think of boosting as a very human-like learning process where we apply past experiences to new iterations of tasks we have to perform.
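To make that sequential idea concrete, here is a bare-bones sketch of boosting with squared-error loss, again using rpart trees on mtcars rather than the gbm package itself; the shrinkage value, tree depth, and object names are arbitrary choices of mine:

    library(rpart)

    set.seed(1)
    shrinkage <- 0.05   # weight given to each new tree's contribution
    n_iter    <- 200    # number of boosting iterations (trees)

    # start from a constant prediction and repeatedly fit a small tree
    # to the current residuals
    pred  <- rep(mean(mtcars$mpg), nrow(mtcars))
    trees <- vector("list", n_iter)

    for (m in seq_len(n_iter)) {
      resid_m    <- mtcars$mpg - pred                      # what the ensemble still gets wrong
      trees[[m]] <- rpart(resid_m ~ . - mpg, data = mtcars,
                          control = rpart.control(maxdepth = 2, minsplit = 10))
      pred <- pred + shrinkage * predict(trees[[m]], mtcars)   # shrunken update
    }

Each tree only nudges the prediction a little (that's the shrinkage), which is why boosted models typically need hundreds or thousands of iterations.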

Now, the gradient part of the name comes from how each step of that sequential fit is chosen: every new tree is fit to the negative gradient of the loss function evaluated at the current predictions (for squared-error loss those are simply the residuals), so the algorithm is effectively performing gradient descent in function space. Tied into this in practice is the method used to determine the optimal number of models (referred to as iterations in the gbm documentation) to be used for prediction in order to avoid overfitting.

[Figure: GBM training (black) and CV error (green) loss functions against the number of iterations]

As you can see from the visual above (this was a classification application, but the same holds true for regression), the CV error drops quite steeply at first, as the early trees capture most of the available signal, before flattening out and climbing back up again as the ensemble begins to overfit. The optimal iteration number is the one corresponding to the minimum of the CV error curve (where its gradient equals 0), which is conveniently illustrated by the blue dashed line.
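In gbm itself you get that curve, and the optimal iteration count, by requesting cross-validation when fitting and then calling gbm.perf(); a small sketch on mtcars, with the tuning values (shrinkage, depth, number of folds) picked arbitrarily for illustration:

    library(gbm)

    set.seed(1)
    fit <- gbm(mpg ~ ., data = mtcars,
               distribution = "gaussian",   # squared-error loss
               n.trees = 3000,
               interaction.depth = 2,
               shrinkage = 0.01,
               n.minobsinnode = 5,          # small value because mtcars is tiny
               cv.folds = 5)

    # plots training and CV error against iteration number and returns the
    # iteration with the lowest CV error (the blue dashed line in the figure)
    best_iter <- gbm.perf(fit, method = "cv")
    preds     <- predict(fit, newdata = mtcars, n.trees = best_iter)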

Ridgeway's gbm implementation uses classification and regression trees, and while I can't claim to read his mind, I would imagine that the speed and ease (to say nothing of their robustness to data shenanigans) with which trees can be fit had a pretty significant effect on his choice of modeling technique. That being said, while I might be wrong, I can't imagine a strictly theoretical reason why virtually any other modeling technique couldn't have been implemented. Again, I cannot claim to know Ridgeway's mind, but I imagine the generalized part of gbm's name refers to the multitude of potential applications. The package can be used to perform regression (linear, Poisson, and quantile), binomial (using a number of different loss functions) and multinomial classification, and survival analysis (or at least hazard function calculation, if the coxph distribution is any indication).
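Switching between those applications is mostly a matter of the distribution argument; a rough sketch on simulated toy data (the data-generating details and parameter values here are entirely my own, just to make the calls runnable):

    library(gbm)
    library(survival)   # for Surv(), used with the coxph distribution

    set.seed(1)
    d <- data.frame(x1 = rnorm(500), x2 = rnorm(500))
    d$y_bin   <- rbinom(500, 1, plogis(d$x1))      # 0/1 outcome for binomial classification
    d$y_count <- rpois(500, exp(0.3 * d$x2))       # counts for Poisson regression
    d$time    <- rexp(500, exp(0.2 * d$x1))        # survival times
    d$status  <- rbinom(500, 1, 0.8)               # event indicator

    fit_logit <- gbm(y_bin ~ x1 + x2, data = d, distribution = "bernoulli",
                     n.trees = 500, shrinkage = 0.01)
    fit_pois  <- gbm(y_count ~ x1 + x2, data = d, distribution = "poisson",
                     n.trees = 500, shrinkage = 0.01)
    fit_cox   <- gbm(Surv(time, status) ~ x1 + x2, data = d, distribution = "coxph",
                     n.trees = 500, shrinkage = 0.01)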

Elith's paper seems vaguely familiar (I think I ran into it last summer while looking into gbm-friendly visualization methods) and, if memory serves right, it featured an extension of the gbm library, focusing on automated model tuning for regression (as in gaussian distribution, not binomial) applications and improved plot generation. I imagine the BRT nomenclature is there to help clarify the nature of the modeling technique, whereas GBM is more general.

Hope this helps clear a few things up.