Solved – Nested Cross-Validation for Feature Selection and Hyperparameter Optimization

Tags: feature selection, hyperparameter, logistic, model selection, random forest

I spent quite a few hours trying to understand nested cross-validation and to implement it myself. I'm really uncertain whether I am doing this right, and I don't know how to check other than asking experts whether I am indeed doing it right.

I am trying to optimize the hyperparameters of my feature selector. Initially, I fell into the trap of introducing selection bias into my evaluation of the model, similar to what was described here: I was trying to optimize my hyperparameters using the same data I selected features with.

Here is my understanding of nested cross-validation, mixed with how I am trying to use it (using 5 folds for simplicity; a rough code sketch follows the list):

  1. Split all the data into 5 sets, four of which will be used to train, one of which will be used to test.
  2. Create the parameter grid I want to search over, and do a grid search over the parameter grid with 5-fold cross validation, using the "outer" training data, and store the parameter set that best maximizes the ROC score (using sklearn's GridSearchCV, which is why this is all in one step).
  3. Now that I have the best parameters for my random forest (the best for this iteration), fit that Random Forest model to the "outer" training data, and transform both the outer training and test data (i.e., use the best feature selector obtained in (2) to actually select features).
  4. Fit my "main" model (logistic regression) using the transformed outer training data.
  5. Using the transformed outer testing data, use the fitted logistic regression model made in (4) to make predictions, then find the ROC score of those predictions.
  6. Store the best parameters found in (2) and the score found in (5), then repeat from (2), using a different set from the original 5 as the testing set and the other 4 as training sets.
  7. At the end of it all, the parameters associated with the highest score in (6) should be used as the parameters for my Random Forest feature selection process.
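Here is a rough code sketch of the loop described above. The pipeline layout (a random-forest-based selector feeding a logistic regression), the parameter grid, and the roc_auc scoring are my assumptions about how the pieces fit together, not a definitive recipe:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=300, n_features=40, random_state=0)  # stand-in data

pipe = Pipeline([
    ("select", SelectFromModel(RandomForestClassifier(random_state=0))),  # feature selector being tuned
    ("clf", LogisticRegression(max_iter=1000)),                           # the "main" model
])
param_grid = {"select__estimator__n_estimators": [50, 100],   # hypothetical grid
              "select__estimator__max_depth": [3, None]}

outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
results = []
for train_idx, test_idx in outer.split(X, y):                     # step 1: outer split
    gs = GridSearchCV(pipe, param_grid, scoring="roc_auc", cv=5)  # step 2: inner 5-fold search
    gs.fit(X[train_idx], y[train_idx])                            #         on the outer training data
    proba = gs.predict_proba(X[test_idx])[:, 1]                   # steps 3-5: best pipeline, outer test set
    results.append((gs.best_params_, roc_auc_score(y[test_idx], proba)))  # step 6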

During my research on this topic, I often saw the phrases "internal validation is not enough!" and "don't validate with data used to train." By internal validation, I think they're referring to what I do in (2). By doing a larger cross-validation on the results I get from it, I think I am addressing the "not enough" part, and the outer validation also means I'm using different data to train and to evaluate. I think I am also addressing selection bias by repeating the feature selection in each iteration of the outer CV.

Am I missing something? When looking at examples of other people doing this, it seems like they use nested cross-validation either to optimize hyperparameters or to select features, not both. That makes me feel I should have another nest somewhere, but I don't see where it would fit. Also, the "random" part of Random Forests makes me worry that I am not doing anything generalizable, but rather something that's entirely dependent on chance.

Also, when people say "use the inner CV to pick parameters," I am not really sure what that means, because the inner CV is run multiple times (once per outer fold), isn't it?

Best Answer

I figured out where my understanding was off, so I figured I should answer my own question in case anyone else stumbles upon it.

To start, sklearn makes nested cross-validation deceptively easy. I read their example over and over but never got it until I looked at the extremely helpful pseudocode given in the answer to this question.

Briefly, this is what I had to do (which is almost a copy of the example scikit-learn gives):

  1. Initialize two cross-validation generators, inner and outer. For this, I used the StratifiedKFold() constructor.
  2. Create a RandomizedSearchCV object (so much quicker than the full grid search; I think one could use sklearn objects to calculate the Bayesian Information Criterion and build an even cooler/faster/smarter hyperparameter optimizer, but that's beyond my knowledge, I just heard Andreas Mueller mention it in a lecture once), giving the inner cross-validator as the cv parameter and the rest of your stuff (estimator, parameter distributions, scoring function, etc.) as usual.
  3. Fit this to your data (X) and labels (y). You want to fit it because you'll need the fitted search object later to read off the best score and best parameters (internally, fitting runs the estimators in your pipeline to transform X, then fits the final estimator on the transformed data, producing a fitted estimator).
  4. Use cross_val_score and give it your newly fitted RandomizedSearchCV object, X, y, and the outer cross-validator. I assigned the output to a variable called scores and returned a tuple containing the best score and best parameters from the randomized search (rs.best_score_, rs.best_params_) together with the scores variable. I'm a little fuzzy on what exactly I needed and got a bit lazy, so this might be more information than necessary.

In code, this is kind of how it looks:

from sklearn.model_selection import RandomizedSearchCV, cross_val_score

def nestedCrossValidation(X, y, pipe, param_dist, scoring, outer, inner):
    # Inner loop: randomized hyperparameter search, cross-validated with `inner`
    rs = RandomizedSearchCV(pipe, param_dist, verbose=1, scoring=scoring, cv=inner)
    rs.fit(X, y)
    # Outer loop: unbiased estimate of how well the whole search procedure does
    scores = cross_val_score(rs, X, y, cv=outer)
    return ((rs.best_score_, rs.best_params_), scores)

cross_val_score splits the data into an outer training/test set, runs the randomized search on that outer training set (which itself gets split into inner training/test sets to generate the inner scores), then comes back up to cross_val_score to evaluate on the outer test set and move on to the next outer split.
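To make that nesting concrete, here is roughly the pair of loops that cross_val_score(rs, X, y, cv=outer) runs for you. This is a hand-written sketch reusing the names from the function above, not sklearn's actual implementation:

from sklearn.base import clone

outer_scores = []
for train_idx, test_idx in outer.split(X, y):
    search = clone(rs)                       # fresh, unfitted copy of the RandomizedSearchCV
    search.fit(X[train_idx], y[train_idx])   # inner CV: splits the outer training set again
    outer_scores.append(search.score(X[test_idx], y[test_idx]))  # evaluate on the held-out outer fold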

AFTER you do this, you'll get a bunch of cross-validation scores. My original question was: "what do you get/do now?" Nested cross-validation is not for model selection. What I mean by that is that you're not trying to get parameter values for your final model out of it; that's what the inner RandomizedSearchCV is for.

But of course, if you are using something like a RandomForest for feature selection in your pipeline, then you'd expect a different set of parameters each time! So what do you really get that's useful?

Nested cross-validation gives an unbiased estimate of how good your methodology/series of steps is. What is "good"? Good is defined by the stability of the hyperparameters and of the cross-validation scores you ultimately get. Say you get numbers like I did: my cross-validation scores were [0.57027027, 0.48918919, 0.37297297, 0.74444444, 0.53703704]. So depending on the mood of my method, I can get an ROC score anywhere between 0.37 and 0.74, which is obviously undesirable. If you were to look at my hyperparameters, you'd see that the "optimal" hyperparameters vary wildly between outer folds. Whereas if I got consistently high cross-validation scores, and the optimal hyperparameters were all in the same ballpark, I could be fairly confident that the way I am choosing to select features and model my data is pretty good.
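For what it's worth, a quick way to put a number on that spread (just plain numpy, nothing special):

import numpy as np

scores = np.array([0.57027027, 0.48918919, 0.37297297, 0.74444444, 0.53703704])
print(scores.mean(), scores.std())  # about 0.54 +/- 0.12, a very wide spread for an ROC AUC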

If you have instability, I am not sure what you can do. I'm still new to this; the gurus on this board probably have better advice than blindly changing your methodology.

But if you have stability, what's next? This is another important aspect that I had neglected to understand: a really good, predictive, generalizable model built from your training data is NOT the final model, though it's close. The final model uses all of your data, because by then you're done testing, optimizing, and tweaking. (Yes, if you were to cross-validate a model with data you used to fit it, you'd get a biased result, but why would you cross-validate at that point? You've already done that, and hopefully a bias issue no longer exists.) You give the model all the data you can so it can make the most informed decisions possible, and the next time you see how well it does is when it's out in the wild, on data that neither you nor the model has ever seen before.
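To make that final step concrete, it might look something like this (a sketch, reusing the rs, X, and y names from the code above; rs.best_estimator_ is the pipeline refit on all the data with the best parameters found):

rs.fit(X, y)                        # hyperparameter search over the full dataset
final_model = rs.best_estimator_    # pipeline refit on all the data with the best parameters
# final_model is what you'd use on genuinely new data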

I hope this helps someone. For some reason it took me a really long time to wrap my head around this, and here are some other links I used to understand:

http://www.pnas.org/content/99/10/6562.full.pdf — A paper that re-examines data and conclusions drawn by other genetics papers that don't use nested cross-validation for feature selection/hyper-parameter selection. It's somewhat comforting to know that even super smart and accomplished people also get swindled by statistics from time to time.

http://jmlr.org/papers/volume11/cawley10a/cawley10a.pdf — iirc, I've seen an author of this paper answer a ton of questions about this topic on this forum

Training with the full dataset after cross-validation? — One of the aforementioned authors answering a similar question in a more colloquial manner.

http://scikit-learn.org/stable/auto_examples/model_selection/plot_nested_cross_validation_iris.html — the sklearn example
