Your second procedure assumes you have some other feature selection algorithm (for example, stepwise regression with some stopping rule), distinct from the cross-validation. If you don't have this, you'll just have to use the first procedure (where cross-validation is the whole feature-selection algorithm).
Also, even if the second procedure is applicable, the first procedure might do better. In the second procedure, a greedy feature-selection algorithm might always pick models that are overfit to the training data. Then the CV would only let you choose among these bad models. This shouldn't happen in the first procedure.
On the other hand, if your problem does have a specialized feature-selection algorithm which is computationally-efficient, then the second procedure may run much faster than the first.
If you do use the second procedure, one way to choose a best feature set is to let CV choose the model size. At every model size, you might compare different models on each data split, but average their test errors across all splits. This way, you can use CV to decide which model size gives the best estimated performance. Finally, rerun your feature-selection algorithm on the full dataset, up to the size chosen by CV, and use this as the final feature set.
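As a rough sketch of that procedure (the select_features(X, y, k) greedy selector here is a hypothetical placeholder for whatever specialized algorithm you have), it could look like this:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import KFold

    def choose_model_size(X, y, max_k, select_features):
        cv = KFold(n_splits=5, shuffle=True, random_state=0)
        mean_scores = []
        for k in range(1, max_k + 1):
            fold_scores = []
            for train, test in cv.split(X):
                feats = select_features(X[train], y[train], k)   # selection happens inside the fold
                model = LinearRegression().fit(X[train][:, feats], y[train])
                fold_scores.append(model.score(X[test][:, feats], y[test]))
            mean_scores.append(np.mean(fold_scores))             # average test performance across splits
        best_k = int(np.argmax(mean_scores)) + 1                 # CV chooses the model size
        return best_k, select_features(X, y, best_k)             # rerun selection on the full dataset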
I figured out where my understanding was off, so I thought I should answer my own question in case anyone else stumbles upon it.
To start, sklearn makes nested cross-validation deceptively easy. I read their example over and over but never got it until I looked at the extremely helpful pseudocode given in the answer to this question.
Briefly, this is what I had to do (which is almost a copy of the example scikit-learn gives):
- Initialize two cross-validation generators, inner and outer. For this, I used the StratifiedKFold() constructor.
- Create a RandomizedSearchCV object (much quicker than a full grid search; I believe one could use sklearn objects to calculate the Bayesian information criterion and build an even cooler/faster/smarter hyperparameter optimizer, but that is beyond my knowledge, I just heard Andreas Mueller talk about it in some lecture once), giving the inner cross-validator as the cv parameter, and the rest of your stuff (estimators, scoring function, etc.) as normal.
- Fit this to your training set (X) and labels (y). You need to fit it because you'll need a fitted estimator for the next step (i.e., the estimator you get after the pipeline transforms X and y and the final estimator is fit on them).
- Use cross_val_score and give it your newly-fitted RandomizedSearchCV object, X, y, and the outer cross-validator. I assigned the output to a variable called scores and returned a tuple consisting of a tuple with the best score and best parameters from the randomized search (rs.best_score_, rs.best_params_) and the scores variable. I'm a little fuzzy on what exactly I needed and got a bit lazy, so this might be more information returned than necessary.
In code, this is kind of how it looks:
    from sklearn.model_selection import RandomizedSearchCV, cross_val_score

    def nestedCrossValidation(X, y, pipe, param_dist, scoring, outer, inner):
        # Inner loop: randomized search tunes the hyperparameters
        rs = RandomizedSearchCV(pipe, param_dist, verbose=1, scoring=scoring, cv=inner)
        rs.fit(X, y)
        # Outer loop: estimates how well the whole tuned procedure generalizes
        scores = cross_val_score(rs, X, y, cv=outer)
        return ((rs.best_score_, rs.best_params_), scores)
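For context, here is a hedged usage sketch; the toy data, parameter ranges, and CV settings are illustrative, not prescriptive:

    from scipy.stats import randint
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import StratifiedKFold
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    # Toy data standing in for your real X and y
    X, y = make_classification(n_samples=200, random_state=0)

    pipe = Pipeline([("scale", StandardScaler()),
                     ("clf", RandomForestClassifier(random_state=0))])
    param_dist = {"clf__n_estimators": randint(50, 500),
                  "clf__max_depth": randint(2, 10)}
    inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)

    (best_score_params, scores) = nestedCrossValidation(
        X, y, pipe, param_dist, "roc_auc", outer, inner)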
cross_val_score splits the data into a training/test set and runs the randomized search on that training set, which itself splits into inner training/test sets and generates scores; then it goes back up to cross_val_score to evaluate on the held-out test set and moves on to the next split.
AFTER you do this, you'll get a bunch of cross-validation scores. My original question was: "what do you get/do now?" Nested cross-validation is not for model selection. What I mean is that you're not trying to get parameter values that are good for your final model. That's what the inner RandomizedSearchCV is for.
But of course, if you are using something like a RandomForest for feature selection in your pipeline, then you'd expect a different set of parameters each time! So what do you really get that's useful?
Nested cross-validation gives an unbiased estimate of how good your methodology/series of steps is. What is "good"? Good is defined by the stability of the hyperparameters and of the cross-validation scores you ultimately get. Say you get numbers like mine: cross-validation scores of [0.57027027, 0.48918919, 0.37297297, 0.74444444, 0.53703704]. So depending on the mood of my method of doing things, I can get an ROC score anywhere between 0.37 and 0.74, which is obviously undesirable. If you were to look at my hyper-parameters, you'd see that the "optimal" hyper-parameters vary wildly between folds. Whereas if I got consistently high cross-validation scores, and the optimal hyper-parameters were all in the same ballpark, I could be fairly confident that the way I am choosing to select features and model my data is pretty good.
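To make that stability check concrete, here is a rough sketch (my own illustration, reusing the pipe, param_dist, inner, and outer objects from above) that collects the winning hyper-parameters and the test score per outer fold:

    import numpy as np
    from sklearn.model_selection import RandomizedSearchCV

    fold_scores, fold_params = [], []
    for train, test in outer.split(X, y):
        rs = RandomizedSearchCV(pipe, param_dist, scoring="roc_auc", cv=inner)
        rs.fit(X[train], y[train])                 # tune on the outer training fold only
        fold_scores.append(rs.score(X[test], y[test]))
        fold_params.append(rs.best_params_)

    print(np.mean(fold_scores), np.std(fold_scores))  # a large spread signals instability
    for p in fold_params:
        print(p)                                      # do the "optimal" settings agree across folds?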
If you have instability, I am not sure what you can do. I'm still new to this; the gurus on this board probably have better advice than blindly changing your methodology.
But if you have stability, what's next? This is another important aspect that I neglected to understand: a really good, predictive, generalizable model built from your training data is NOT the final model. But it's close. The final model uses all of your data, because you're done testing, optimizing, and tweaking. (Yes, if you tried to cross-validate a model with the data you used to fit it, you'd get a biased result, but why would you cross-validate at this point? You've already done that, and hopefully a bias issue doesn't exist.) You give it all the data you can so it makes the most informed decisions it can, and the next time you see how well your model does is when it's in the wild, using data that neither you nor the model has ever seen before.
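In sklearn terms, that final refit is short (a sketch, again reusing the objects from above):

    rs = RandomizedSearchCV(pipe, param_dist, scoring="roc_auc", cv=inner)
    rs.fit(X, y)                      # use every labelled example you have
    final_model = rs.best_estimator_  # deploy this; its next test is unseen, real-world data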
I hope this helps someone. For some reason it took me a really long time to wrap my head around this, and here are some other links I used to understand:
http://www.pnas.org/content/99/10/6562.full.pdf — A paper that re-examines data and conclusions drawn by other genetics papers that don't use nested cross-validation for feature selection/hyper-parameter selection. It's somewhat comforting to know that even super smart and accomplished people also get swindled by statistics from time to time.
http://jmlr.org/papers/volume11/cawley10a/cawley10a.pdf — IIRC, I've seen one of the authors of this paper answer a ton of questions about this topic on this forum.
Training with the full dataset after cross-validation? — One of the aforementioned authors answering a similar question in a more colloquial manner.
http://scikit-learn.org/stable/auto_examples/model_selection/plot_nested_cross_validation_iris.html — the sklearn example
Best Answer
The most important downside of searching along single parameters instead of optimizing them all together is that you ignore interactions. It is quite common that, e.g., more than one parameter influences model complexity. In that case, you need to look at the interaction in order to successfully optimize the hyperparameters.
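For illustration (my own toy example, not from the question): the C and gamma parameters of an RBF SVM jointly control model complexity, so tuning them one at a time can land on a different pair than a joint search over both:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV, cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, random_state=0)
    Cs, gammas = np.logspace(-2, 2, 5), np.logspace(-3, 1, 5)

    # One-at-a-time: fix gamma at an arbitrary value, tune C, then tune gamma with that C.
    best_C = max(Cs, key=lambda c: cross_val_score(SVC(C=c, gamma=1.0), X, y).mean())
    best_g = max(gammas, key=lambda g: cross_val_score(SVC(C=best_C, gamma=g), X, y).mean())

    # Joint grid: considers every (C, gamma) pair, capturing the interaction.
    joint = GridSearchCV(SVC(), {"C": Cs, "gamma": gammas}).fit(X, y)
    print((best_C, best_g), joint.best_params_)  # the two answers can differ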
Depending on how large your data set is and how many models you compare, optimization strategies that return the maximum observed performance run into trouble (this is true for both grid search and your strategy). The reason is that searching through a large number of performance estimates for the maximum "skims" the variance of the performance estimate: you may just end up with a model and train/test split combination that happens to look good by chance. Even worse, you may get several perfect-looking combinations, and the optimization then cannot know which model to choose and becomes unstable.
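A tiny simulation (my illustration) of that variance skimming: give many models the same true accuracy, add estimation noise, and the maximum observed score is biased upward:

    import numpy as np

    rng = np.random.default_rng(0)
    true_acc, n_models, n_test = 0.7, 100, 50
    # Each model's CV estimate is a noisy binomial draw around the SAME true accuracy
    observed = rng.binomial(n_test, true_acc, size=n_models) / n_test
    print(observed.max())   # typically well above the true 0.7
    print(observed.mean())  # close to 0.7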