XGBoost will produce different values for feature importances with different hyperparameters on the same dataset. When using XGBoost as a feature selection algorithm for a different model, should I therefore optimize the hyperparameters first? Or are there no hard and fast rules, and in practice should I try, say, both the default and the optimized set of hyperparameters and see what really works?
Solved – Feature selection with XGBoost
boosting, feature selection, hyperparameter
Related Solutions
For feature selection, we need a scoring function as well as a search method to optimize the scoring function.
You may use RF as a feature ranking method if you define some relevant importance score. RF selects features at random with replacement and groups every subset in a separate subspace (called the random subspace). One importance scoring function could assign the accuracy of each tree to every feature in that tree's random subspace, repeating this for every tree. Since the subspaces are generated at random, you may put a threshold on how often a feature must appear before computing its importance score.
Summary:
Step 1: If feature X2 appears in at least 25% of the trees, score it. Otherwise, do not rank the feature, because we do not have sufficient information about its performance.
Step 2: Now, assign the performance score of every tree in which X2 appears to X2 and average the scores. For example: perf(Tree1) = 0.85, perf(Tree2) = 0.70, perf(Tree3) = 0.30.
Then, the importance of feature X2 = (0.85+0.70+0.30)/3 = 0.6167
You may consider a more advanced setting by including the split depth of the feature or the information gain value in the decision tree. There can be many ways to design a scoring function based on decision trees and RF.
Regarding the search method, your recursive method seems reasonable as a way to select the top ranked ones.
Finally, you may use RF either as a classifier or as a regression model when selecting your features, since both supply a performance score. The score is indicative because it is based on the out-of-bag (OOB) samples, so in a simpler setting you may skip cross-validation.
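A minimal sketch of this kind of scoring, assuming a scikit-learn RandomForestClassifier and using each tree's accuracy on a held-out split as the per-tree performance score (the 25% appearance threshold is the one from the summary above; an OOB-based score would follow the same pattern):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy data; substitute your own X and y.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

n_features = X.shape[1]
appearances = np.zeros(n_features)   # number of trees that split on each feature
score_sums = np.zeros(n_features)    # summed per-tree accuracy for those trees

for tree in rf.estimators_:
    used = np.unique(tree.tree_.feature[tree.tree_.feature >= 0])  # features this tree splits on
    perf = tree.score(X_val, y_val)   # per-tree performance score
    appearances[used] += 1
    score_sums[used] += perf

# Rank only features that appear in at least 25% of the trees.
threshold = 0.25 * len(rf.estimators_)
importance = np.where(appearances >= threshold,
                      score_sums / np.maximum(appearances, 1),
                      np.nan)
print(importance)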
I figured out where my understanding was off, so I figured I should answer my own question in case anyone else stumbles upon it.
To start, sklearn makes nested cross-validation deceptively easy. I read their example over and over but never got it until I looked at the extremely helpful pseudocode given in the answer to this question.
Briefly, this is what I had to do (which is almost a copy of the example scikit-learn gives):
- Initialize two cross-validation generators, inner and outer. For this, I used the StratifiedKFold() constructor.
- Create a RandomizedSearchCV object (so much quicker than the whole grid search; I think one could use sklearn objects to calculate the Bayesian Information Criterion and make an even cooler/faster/smarter hyperparameter optimizer, but that's beyond my knowledge, I just heard Andreas Mueller mention it in a lecture once), giving the inner cross-validator as the cv parameter, and the rest of your stuff (estimators, scoring function, etc.) as normal.
- Fit this to your training set (X) and labels (y). You want to fit it because you'll need a fitted estimator for the next step (i.e., the estimator you get after transforming X and y with the estimators in your pipeline and then fitting the final estimator).
- Use cross_val_score and give it your newly fitted RandomizedSearchCV object, X, y, and the outer cross-validator. I assigned the output to a variable called scores and returned a tuple consisting of a tuple with the best score and best parameters from the randomized search (rs.best_score_, rs.best_params_) together with the scores variable. I'm a little fuzzy on what exactly I needed and got a bit lazy, so this might be more information returned than necessary.
In code, this is kind of how it looks:
from sklearn.model_selection import RandomizedSearchCV, cross_val_score

def nestedCrossValidation(X, y, pipe, param_dist, scoring, outer, inner):
    # Inner loop: randomized hyperparameter search over the pipeline.
    rs = RandomizedSearchCV(pipe, param_dist, verbose=1, scoring=scoring, cv=inner)
    rs.fit(X, y)
    # Outer loop: unbiased estimate of the whole tuning procedure.
    scores = cross_val_score(rs, X, y, cv=outer)
    return ((rs.best_score_, rs.best_params_), scores)
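For context, a hypothetical call could look like the following; the pipeline, parameter distributions, and scorer below are illustrative placeholders, not part of the original post:

from scipy.stats import randint
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# X, y: your feature matrix and labels.
pipe = Pipeline([("scale", StandardScaler()), ("rf", RandomForestClassifier())])
param_dist = {"rf__n_estimators": randint(50, 500), "rf__max_depth": randint(2, 10)}
inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)

(best_score, best_params), scores = nestedCrossValidation(
    X, y, pipe, param_dist, scoring="roc_auc", outer=outer, inner=inner)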
cross_val_score splits the data into an outer training/test set and runs the randomized search on that outer training set, which itself splits into inner training/validation sets and generates scores; control then goes back up to cross_val_score to evaluate on the outer test set and move on to the next outer split.
AFTER you do this, you'll get a bunch of cross-validation scores. My original question was: "what do you get/do now?" Nested cross-validation is not for model selection. What I mean by that is that you're not trying to get parameter values that are good for your final model. That's what the inner RandomizedSearchCV is for.
But of course, if you are using something like a RandomForest for feature selection in your pipeline, then you'd expect a different set of parameters each time! So what do you really get that's useful?
Nested cross-validation is to give an unbiased estimate as to how good your methodology/series of steps is. What is "good"? Good is defined by the stability of hyperparameters and the cross-validation scores you ultimately get. Say you get numbers like I did: I got cross-validation scores of: [0.57027027, 0.48918919, 0.37297297, 0.74444444, 0.53703704]. So depending on the mood of my method of doing things, I can get an ROC score between 0.37 and 0.74 — obviously this is undesirable. If you were to look at my hyper-parameters, you'd see that the "optimal" hyper-parameters vary wildly. Whereas if I got consistent cross-validation scores that were high, and the optimal hyper-parameters were all in the same ballpark, I can be fairly confident that the way I am choosing to select features and model my data is pretty good.
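A trivial way to summarize that stability (a quick sketch, using the scores quoted above):

import numpy as np

scores = np.array([0.57027027, 0.48918919, 0.37297297, 0.74444444, 0.53703704])
print(f"mean score: {scores.mean():.3f} +/- {scores.std():.3f}")  # a large spread signals an unstable methodology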
If you have instability, I am not sure what you can do. I'm still new to this; the gurus on this board probably have better advice than blindly changing your methodology.
But if you have stability, what's next? This is another important aspect that I neglected to understand: a really good, predictive, generalizable model built from your training data is NOT the final model, but it's close. The final model uses all of your data, because you're done testing, optimizing, and tweaking (yes, if you tried to cross-validate a model with the data you used to fit it, you'd get a biased result, but why would you cross-validate it at this point? You've already done that, and hopefully no bias issue exists). You give it all the data you can so it can make the most informed decisions possible, and the next time you'll see how well your model does is when it's in the wild, on data that neither you nor the model has ever seen before.
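In code, that final refit on all of the data might look like this rough sketch (reusing the illustrative pipe, param_dist, and inner splitter from above):

from sklearn.model_selection import RandomizedSearchCV

# Re-run the hyperparameter search on ALL available data and keep the refitted best pipeline.
final_search = RandomizedSearchCV(pipe, param_dist, scoring="roc_auc", cv=inner)
final_search.fit(X, y)                      # every sample you have
final_model = final_search.best_estimator_  # deploy this; judge it only on genuinely new data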
I hope this helps someone. For some reason it took me a really long time to wrap my head around this, and here are some other links I used to understand:
http://www.pnas.org/content/99/10/6562.full.pdf — A paper that re-examines data and conclusions drawn by other genetics papers that don't use nested cross-validation for feature selection/hyper-parameter selection. It's somewhat comforting to know that even super smart and accomplished people also get swindled by statistics from time to time.
http://jmlr.org/papers/volume11/cawley10a/cawley10a.pdf — iirc, I've seen an author of this paper answer a ton of questions about this topic on this forum.
Training with the full dataset after cross-validation? — One of the aforementioned authors answering a similar question in a more colloquial manner.
http://scikit-learn.org/stable/auto_examples/model_selection/plot_nested_cross_validation_iris.html — the sklearn example
Best Answer
From comments, Matthew Drury writes: