I expect that there would be some difference in the training and CV AUC scores, but should this much of a difference be of concern? If not, how should I interpret and report these results? If it is of concern, what are some possible reasons for the differences and strategies I can take to fix them?
You are overfitting the training data. The stark decrease in AUC shows that given new data, your model would likely not perform as well as it does for the training data.
- The data are ordered by date, but I permuted the data row-wise before using gbm.fixed and predict.gbm. Also, from what I understand, gbm.step randomizes the data as well.
Structured dependency in the data is something that you should try to capture in the model. If date/time is important then you should find a way to include it. Admittedly this is ignored by most machine learning algorithms.
- Could I have too few observations or too many variables? Could overfitting be an issue?
Yes, your results are the definition of overfitting.
- All of the individual animals are pooled together; could differences in the preferences of each individual animal be contributing?
It is possible. Another consideration for model development.
- Could the number of CV folds in gbm.step or gbm.simplify be at play?
Yes, read about the bias-variance trade-off. A quick train-versus-held-out check of the kind sketched below will also make the gap concrete.
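For what it's worth, here is a minimal sketch of that check. The simulated data, variable names, and tuning values are purely illustrative (nothing here is taken from your study), and the AUC function is just the rank-based Mann-Whitney formulation so no extra packages are needed:

```r
library(gbm)

## Rank-based (Mann-Whitney) AUC, to avoid pulling in another package
auc <- function(y, p) {
  r  <- rank(p)
  n1 <- sum(y == 1); n0 <- sum(y == 0)
  (sum(r[y == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}

set.seed(1)
n   <- 1000
dat <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
dat$y <- rbinom(n, 1, plogis(dat$x1 - 0.5 * dat$x2))
train <- sample(n, 700)

fit <- gbm(y ~ x1 + x2, data = dat[train, ], distribution = "bernoulli",
           n.trees = 2000, shrinkage = 0.01, interaction.depth = 3,
           cv.folds = 5)

p_train <- predict(fit, dat[train, ],  n.trees = fit$n.trees, type = "response")
p_test  <- predict(fit, dat[-train, ], n.trees = fit$n.trees, type = "response")
auc(dat$y[train],  p_train)   # training AUC: optimistic
auc(dat$y[-train], p_test)    # held-out AUC: the number worth reporting
```

A large gap between those two numbers is exactly the pattern you describe; shrinking the learning rate, limiting tree depth, and letting CV choose the number of trees are the standard remedies.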
As @aginensky mentioned in the comments thread, it's impossible to get inside the author's head, but BRT is most likely simply a clearer description of gbm's modeling process which is, forgive me for stating the obvious, boosted classification and regression trees. And since you've asked about boosting, gradients, and regression trees, here are my plain English explanations of the terms. FYI, CV is not a boosting method but rather a method to help identify optimal model parameters through repeated sampling. See here for some excellent explanations of the process.
Boosting is a type of ensemble method. Ensemble methods are those in which the final prediction is made by aggregating the predictions of a number of individual models. Boosting, bagging, and stacking are some widely implemented ensemble methods. Stacking involves fitting a number of different models individually (of whatever structure you choose) and then combining them in a single linear model. This is done by fitting the individual models' predictions against the dependent variable. LOOCV SSE is normally used to determine the regression coefficients, and each model is treated as a basis function (to my mind, this is very, very similar to GAM). Similarly, bagging involves fitting a number of similarly structured models to bootstrapped samples. At the risk of once again stating the obvious, stacking and bagging are parallel ensemble methods.
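To make the bagging part concrete, here is a toy sketch of my own (using rpart on a built-in data set, not anything from the packages discussed here): the same model structure is fit to B bootstrap samples and the predictions are averaged.

```r
library(rpart)

set.seed(1)
n <- nrow(mtcars)
B <- 50
preds <- matrix(NA_real_, nrow = n, ncol = B)

for (b in seq_len(B)) {
  idx <- sample(n, replace = TRUE)                # bootstrap sample
  fit <- rpart(mpg ~ ., data = mtcars[idx, ])     # similarly structured model
  preds[, b] <- predict(fit, newdata = mtcars)    # predict on the original data
}

bagged <- rowMeans(preds)   # aggregate by averaging: that is the bagging step
```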
Boosting, however, is a sequential method. Friedman and Ridgeway both describe the algorithmic process in their papers, so I won't insert it here just this second, but the plain English (and somewhat simplified) version is that you fit one model after another, with each subsequent model seeking to minimize the residuals weighted by the previous model's errors (the shrinkage parameter is the weight allocated to each prediction's residual error from the previous iteration, and the smaller you can afford to have it, the better). In an abstract sense, you can think of boosting as a very human-like learning process where we apply past experiences to new iterations of tasks we have to perform.
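And here is a correspondingly stripped-down sketch of the sequential idea (again my own toy code, squared-error loss only; gbm's internal algorithm is more general than this): each new tree is fit to the current residuals and added with a small shrinkage weight.

```r
library(rpart)

set.seed(1)
y <- mtcars$mpg
X <- mtcars[, -1]

shrinkage <- 0.1
n_trees   <- 200

pred <- rep(mean(y), length(y))            # start from a constant fit
for (m in seq_len(n_trees)) {
  res <- y - pred                          # what the current ensemble still gets wrong
  fit <- rpart(res ~ ., data = data.frame(res, X),
               control = rpart.control(maxdepth = 2))
  pred <- pred + shrinkage * predict(fit, newdata = X)   # small weighted correction
}
```

Run for too many iterations, the residuals keep shrinking on the training data even after the fit has stopped generalizing, which is exactly where the next part (choosing the number of iterations) comes in.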
Now, the gradient part of the whole thing comes from the method used to determine the optimal number of models (referred to as iterations in the gbm documentation) to be used for prediction in order to avoid overfitting.
As you can see from the visual above (this was a classification application, but the same holds true for regression), the CV error drops quite steeply at first as the algorithm selects the models that lead to the greatest drop in CV error, before flattening out and climbing back up again as the ensemble begins to overfit. The optimal iteration number is the one corresponding to the minimum of the CV error function (where its gradient equals 0), which is conveniently illustrated by the blue dashed line.
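In code, this is the step that gbm.perf(..., method = "cv") performs for you: it plots the training and CV error curves and returns the iteration at which the CV curve bottoms out. The data and tuning values below are only placeholders to make the snippet run:

```r
library(gbm)

set.seed(1)
n   <- 500
dat <- data.frame(x1 = rnorm(n), x2 = rnorm(n), x3 = rnorm(n))
dat$y <- rbinom(n, 1, plogis(1.5 * dat$x1 - dat$x2))

fit <- gbm(y ~ x1 + x2 + x3, data = dat, distribution = "bernoulli",
           n.trees = 3000, shrinkage = 0.01, interaction.depth = 2,
           cv.folds = 5)

best_iter <- gbm.perf(fit, method = "cv")   # dashed vertical line marks the CV minimum
preds <- predict(fit, newdata = dat, n.trees = best_iter, type = "response")
```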
Ridgeway's gbm implementation uses classification and regression trees, and while I can't claim to read his mind, I would imagine that the speed and ease (to say nothing of their robustness to data shenanigans) with which trees can be fit had a pretty significant effect on his choice of modeling technique. That being said, while I might be wrong, I can't imagine a strictly theoretical reason why virtually any other modeling technique couldn't have been implemented. Again, I cannot claim to know Ridgeway's mind, but I imagine the generalized part of gbm's name refers to the multitude of potential applications. The package can be used to perform regression (linear, Poisson, and quantile), binomial (using a number of different loss functions) and multinomial classification, and survival analysis (or at least hazard function calculation if the coxph distribution is any indication).
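In practice, that "generalized" part mostly means swapping the distribution argument in an otherwise identical call; for example (mtcars is only there to make the snippet runnable, and n.minobsinnode is lowered because the data set is tiny):

```r
library(gbm)

fit_gaussian <- gbm(mpg ~ ., data = mtcars, distribution = "gaussian",
                    n.trees = 500, shrinkage = 0.05, n.minobsinnode = 5)

fit_median   <- gbm(mpg ~ ., data = mtcars,
                    distribution = list(name = "quantile", alpha = 0.5),
                    n.trees = 500, shrinkage = 0.05, n.minobsinnode = 5)
```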
Elith's paper seems vaguely familiar (I think I ran into it last summer while looking into gbm-friendly visualization methods) and, if memory serves right, it featured an extension of the gbm library, focusing on automated model tuning for regression (as in gaussian distribution, not binomial) applications and improved plot generation. I imagine the BRT nomenclature is there to help clarify the nature of the modeling technique, whereas GBM is more general.
Hope this helps clear a few things up.
Best Answer
This is what the paper says:
"As noted above, all of the input predictor variables are seldom equally relevant for prediction. Often only a few of them have substantial influence on the response; the vast majority are irrelevant and could just as well have not been measured. It is often useful to learn the relative importance or contribution of each input variable in predicting the response. For a single tree T, Breiman et al. [1] proposed a measure of (squared) relevance of your measure for each predictor variable xj, based on the number of times that variable was selected for splitting in the tree weighted by the squared improvement to the model as a result of each of those splits. This importance measure is easily generalized to additive tree expansions (3); it is simply averaged over the trees."
So if you can get the estimate for a single tree, you then average it over all of the trees. In other words: how many times the variable was used for splitting, weighted by how much each of those splits improved the model, with the improvement quantified however the fit is being measured (e.g., accuracy). Have you had a look at the Breiman paper?
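Written out in symbols (this is just the quoted passage in Friedman's notation: $v_t$ is the variable split on at internal node $t$, $\hat{i}_t^2$ the squared improvement from that split, $J$ the number of terminal nodes in the tree, and $M$ the number of trees in the ensemble):

$$\hat{I}_j^2(T) = \sum_{t=1}^{J-1} \hat{i}_t^2 \, \mathbf{1}(v_t = j), \qquad \hat{I}_j^2 = \frac{1}{M} \sum_{m=1}^{M} \hat{I}_j^2(T_m)$$

The first expression is the single-tree measure from Breiman et al.; the second is the "simply averaged over the trees" part of the quote.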
Breiman L, Friedman JH, Olshen R, Stone C. Classification and Regression Trees. Wadsworth: Pacific Grove, 1984.