They are different because the set of linear combinations of $N$ regression trees of size $S$ does not include all regression trees of size $NS$. This is easy to see when $S=1$, i.e. decision stumps. In that case, the regression function $f(x)$ obtained by boosting is additive:
$$f(x) = \sum_{i=1}^p f_i(x_i).$$
If instead one were to grow a single regression tree of size $N$, some branches would potentially include splits on more than one variable. These correspond to interactions and thus lead to a function that does not admit an additive decomposition. This additivity is an important characteristic of boosted stumps. One can often obtain better performance by boosting with a carefully chosen combination of $N$ and $S$ than by fitting a single regression tree.
Generally, the base learners are chosen with $S$ not very large, e.g. not more than five. On the other hand, random forests are a popular technique that typically uses much larger trees. In that case, bagging is used to reduce the variance associated with these larger trees.
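To make the stumps-versus-deeper-trees point concrete, here is a minimal sketch (my own illustration in Python/scikit-learn, not the R gbm package discussed below; the synthetic data and parameter values are arbitrary): boosted stumps produce an additive fit and cannot capture a pure $x_1 x_2$ interaction, whereas depth-2 trees can.

```python
# Sketch (assumed setup): boosted stumps (max_depth=1) vs. depth-2 trees on data
# whose signal is a pure interaction x1 * x2.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 2))
y = X[:, 0] * X[:, 1] + 0.1 * rng.standard_normal(2000)  # interaction-only signal

X_train, X_test = X[:1500], X[1500:]
y_train, y_test = y[:1500], y[1500:]

for depth in (1, 2):  # S = 1 (stumps) vs. S = 2 (allows one level of interaction)
    model = GradientBoostingRegressor(max_depth=depth, n_estimators=500,
                                      learning_rate=0.05)
    model.fit(X_train, y_train)
    print(f"max_depth={depth}: test R^2 = {model.score(X_test, y_test):.2f}")
# Expected pattern: depth-1 stumps stay near R^2 ~ 0, depth-2 trees fit the interaction.
```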
As @aginensky mentioned in the comments thread, it's impossible to get inside the author's head, but BRT is most likely simply a clearer description of gbm's modeling process which is, forgive me for stating the obvious, boosted classification and regression trees. And since you've asked about boosting, gradients, and regression trees, here are my plain-English explanations of the terms. FYI, CV is not a boosting method but rather a method to help identify optimal model parameters through repeated sampling. See here for some excellent explanations of the process.
Boosting is a type of ensemble method. Ensemble methods refer to a collection of techniques by which final predictions are made by aggregating the predictions of a number of individual models. Boosting, bagging, and stacking are some widely implemented ensemble methods. Stacking involves fitting a number of different models individually (of any structure of your own choosing) and then combining them in a single linear model. This is done by fitting the individual models' predictions against the dependent variable. LOOCV SSE is normally used to determine the regression coefficients, and each model is treated as a basis function (to my mind, this is very, very similar to GAM). Similarly, bagging involves fitting a number of similarly-structured models to bootstrapped samples. At the risk of once again stating the obvious, stacking and bagging are parallel ensemble methods.
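As a purely illustrative sketch of stacking (Python/scikit-learn; the particular base models, synthetic data, and 5-fold setup are my own choices, not from the answer — the LOOCV-SSE variant described above would correspond to leave-one-out folds):

```python
# Minimal stacking sketch: fit heterogeneous base models, then combine their
# out-of-fold predictions with a single linear model (the "stacker").
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_predict

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

base_models = [Ridge(alpha=1.0), DecisionTreeRegressor(max_depth=4, random_state=0)]

# Out-of-fold predictions act as basis functions for the stacker, so the
# combination weights are not fit on in-sample predictions.
meta_features = np.column_stack([
    cross_val_predict(m, X, y, cv=5) for m in base_models
])

stacker = LinearRegression().fit(meta_features, y)
print("stacking weights:", stacker.coef_)
```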
Boosting, however, is a sequential method. Friedman and Ridgeway both describe the algorithmic process in their papers so I won't insert it here just this second, but the plain-English (and somewhat simplified) version is that you fit one model after the other, with each subsequent model fit to the residual errors left by the models before it (the shrinkage parameter is the weight given to each new model's contribution, and the smaller you can afford to make it, the better). In an abstract sense, you can think of boosting as a very human-like learning process where we apply past experiences to new iterations of tasks we have to perform.
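Here is a bare-bones sketch of that sequential process for squared-error loss (Python, written from scratch for illustration; this is not the gbm implementation itself, and the data and settings are arbitrary): each new stump is fit to the current residuals and its predictions are added with a small shrinkage weight.

```python
# From-scratch sketch of sequential boosting with squared-error loss.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(500, 1))
y = np.sin(X[:, 0]) + 0.2 * rng.standard_normal(500)

shrinkage = 0.1                    # weight given to each new tree's predictions
n_trees = 200
pred = np.full_like(y, y.mean())   # start from the constant model
trees = []

for _ in range(n_trees):
    residuals = y - pred                     # what the ensemble still gets wrong
    tree = DecisionTreeRegressor(max_depth=1).fit(X, residuals)
    pred += shrinkage * tree.predict(X)      # take a small step toward the residuals
    trees.append(tree)                       # keep the trees so new points can be scored

print("training MSE:", np.mean((y - pred) ** 2))
```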
Now, strictly speaking, the gradient part of the name refers to how each new model is fit: at every iteration the next tree is fit to the (negative) gradient of the loss function evaluated at the current ensemble's predictions, which makes boosting a form of gradient descent in function space. Separately, you still need to determine the optimal number of models (referred to as iterations in the gbm documentation) to be used for prediction in order to avoid overfitting, and cross-validation is the standard way to do that.
Looking at a typical plot of CV error against iteration number (the example I have in mind was a classification application, but the same holds true for regression), the CV error drops quite steeply at first, as the algorithm selects the trees that lead to the greatest drop in CV error, before flattening out and climbing back up again as the ensemble begins to overfit. The optimal iteration number is the one corresponding to the minimum of the CV error curve (where its gradient equals zero), conveniently marked in such plots by a vertical dashed line.
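For illustration only, here is a small Python/scikit-learn analogue of that procedure using a single held-out set rather than cross-validation (the data, model settings, and use of staged_predict are my own choices, not part of the gbm workflow described above): track the validation error after every iteration and keep the iteration count at which it is smallest.

```python
# Sketch: choose the number of boosting iterations by monitoring held-out error,
# the analogue of the minimum of the CV-error curve discussed above.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=20, noise=20.0, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

model = GradientBoostingRegressor(n_estimators=1000, learning_rate=0.05,
                                  max_depth=2).fit(X_tr, y_tr)

# staged_predict yields the ensemble's predictions after 1, 2, ..., n_estimators trees.
val_mse = [np.mean((y_val - p) ** 2) for p in model.staged_predict(X_val)]
best_iter = int(np.argmin(val_mse)) + 1
print("best number of iterations:", best_iter)
```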
Ridgeway's gbm implementation uses classification and regression trees, and while I can't claim to read his mind, I would imagine that the speed and ease (to say nothing of their robustness to data shenanigans) with which trees can be fit had a pretty significant effect on his choice of modeling technique. That being said, while I might be wrong, I can't imagine a strictly theoretical reason why virtually any other modeling technique couldn't have been implemented. Again, I cannot claim to know Ridgeway's mind, but I imagine the generalized part of gbm's name refers to the multitude of potential applications. The package can be used to perform regression (linear, Poisson, and quantile), binomial (using a number of different loss functions) and multinomial classification, and survival analysis (or at least hazard function calculation, if the coxph distribution is any indication).
Elith's paper seems vaguely familiar (I think I ran into it last summer while looking into gbm-friendly visualization methods) and, if memory serves right, it featured an extension of the gbm library, focusing on automated model tuning for regression (as in gaussian distribution, not binomial) applications and improved plot generation. I imagine the BRT nomenclature is there to help clarify the nature of the modeling technique, whereas GBM is more general.
Hope this helps clear a few things up.
Best Answer
After obtaining some help from the authors, I can now write down how I understand it. Somebody jump in if there is disagreement.
Say we have some differentiable loss function $L(y, H(x))$, where $H(x)$ is our tree ensemble at some iteration. Let $g_i$ be the gradient of the loss function with respect to the current prediction, evaluated at observation $i$, i.e. $g_i = \partial L(y_i, H(x_i)) / \partial H(x_i)$.
In each iteration, this gradient becomes our new label vector, on which we fit a regression tree: $\tilde{y}_i := g_i$.
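As a concrete example (mine, not the authors'): for squared-error loss the gradient labels are just the residuals up to sign,
$$L(y, H(x)) = \tfrac{1}{2}\bigl(y - H(x)\bigr)^2 \quad\Rightarrow\quad g_i = \frac{\partial L(y_i, H(x_i))}{\partial H(x_i)} = H(x_i) - y_i,$$
so fitting a tree to these labels is fitting it to (the negative of) the current residuals.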
Let's only consider the gradient instances belonging to some parent node at some iteration. So, when I write $\forall g_i$ I mean all the instances in this parent node. Let $L = \{ g_j \mid x_{j,s} \leq d \}$ and define $R$ similarly. Then we search for the best variable $s$ with splitting point $d$ for the next split.
Therefore, we choose $s$ and $d$ according to
$$\min_{s,d} \; \sum_{g_i \in L}(g_i - \bar{g}_L)^2 + \sum_{g_i \in R}(g_i - \bar{g}_R)^2 - \sum_{\forall g_i}(g_i - \bar{g})^2 \\ \quad = \sum_{g_i \in L}g_i^2 - n_L\,\bar{g}_L^2 + \sum_{g_i \in R}g_i^2 - n_R\,\bar{g}_R^2 - \Bigl(\sum_{\forall g_i}g_i^2 - n\,\bar{g}^2\Bigr)$$
(as $\sum_{g_i \in L}g_i^2 + \sum_{g_i \in R}g_i^2 = \sum_{\forall g_i}g_i^2$, these terms cancel out)
$$\quad = -n_L\,\bar{g}_L^2 - n_R\,\bar{g}_R^2 + n\,\bar{g}^2$$
Now, $n\,\bar{g}^2$ is always the same, independent of how we make the split. Hence, for the minimization we can ignore it. Therefore, the minimization from the first line is equivalent to:
$$\min_{s,d}\; -n_L\,\bar{g}_L^2 - n_R\,\bar{g}_R^2,$$
which is equivalent to
$$\max_{s,d} \; n_L\,\bar{g}_L^2 + n_R\,\bar{g}_R^2 \\ \quad = n_L \Bigl(\frac{1}{n_L}\sum_{g_i \in L}g_i\Bigr)^2 + n_R \Bigl(\frac{1}{n_R}\sum_{g_i \in R}g_i\Bigr)^2 \\ \quad = n_L \cdot \frac{1}{n_L^2} \Bigl(\sum_{g_i \in L}g_i\Bigr)^2 + n_R \cdot \frac{1}{n_R^2} \Bigl(\sum_{g_i \in R}g_i\Bigr)^2 \\ \quad = \frac{\bigl(\sum_{g_i \in L}g_i\bigr)^2}{n_L} + \frac{\bigl(\sum_{g_i \in R}g_i\bigr)^2}{n_R}$$
This is almost exactly the formula from the picture, except that they additionally weight it by the overall number of instances in the parent node. I assume this is done to make splits in different nodes comparable, because they use best-first splitting.
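To make the derived criterion concrete, here is a small Python sketch (my own illustration, not the authors' code; the exhaustive search, the hypothetical best_split helper, and the fake gradient data are purely for demonstration) that picks the split $(s, d)$ maximizing $\frac{(\sum_{g_i \in L} g_i)^2}{n_L} + \frac{(\sum_{g_i \in R} g_i)^2}{n_R}$:

```python
# Sketch of the derived criterion: for a candidate split, the score to maximize
# is (sum of gradients in L)^2 / n_L + (sum of gradients in R)^2 / n_R.
import numpy as np

def best_split(X, g):
    """Exhaustively search over features s and thresholds d for the split that
    maximizes the gain derived above; X holds the node's instances, g their gradients."""
    best = (None, None, -np.inf)          # (feature s, threshold d, gain)
    for s in range(X.shape[1]):
        for d in np.unique(X[:, s]):
            left = X[:, s] <= d
            n_L, n_R = left.sum(), (~left).sum()
            if n_L == 0 or n_R == 0:      # skip degenerate splits
                continue
            gain = g[left].sum() ** 2 / n_L + g[~left].sum() ** 2 / n_R
            if gain > best[2]:
                best = (s, d, gain)
    return best

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))
g = np.where(X[:, 1] > 0, 1.0, -1.0) + 0.1 * rng.normal(size=100)  # fake gradients
print(best_split(X, g))   # should pick feature 1 with a threshold near 0
```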