Solved – How to get hyper parameters in nested cross validation

Tags: cross-validation, hyperparameter, scikit-learn

I have read several posts on nested cross validation and am still not 100% sure how to approach model selection with nested cross validation.

To explain my confusion, let me try to walk through the model selection with nested cross validation method step by step.

  1. Create an outer CV loop with K-Fold. This will be used to estimate the performance of the hyperparameters that "won" each inner CV loop.
  2. Use GridSearchCV to create an inner CV loop where in each inner loop, GSCV goes through all possible combinations of the parameter space and comes up with the best set of parameters.
  3. After GSCV has found the best parameters in the inner loop, the resulting model is tested on the outer loop's test set to get an estimate of performance.
  4. The outer loop then moves on, using the next fold as the test set and the rest as the training set, and steps 1-3 repeat. The total number of possible "winning" parameter sets equals the number of folds designated in the outer loop. So if the outer loop has 5 folds, you end up with a performance estimate of an algorithm trained with 5 different sets of hyperparameters, NOT the performance of one particular set of hyperparameters.

This approach is illustrated on SKLearn's example page:
http://scikit-learn.org/stable/auto_examples/model_selection/plot_nested_cross_validation_iris.html
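
For concreteness, here is a minimal sketch of that procedure along the lines of the linked example (the SVC estimator, the parameter grid, and the fold counts are placeholder choices, not prescribed by the question):

# Minimal nested cross validation sketch (assumes scikit-learn is installed).
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Placeholder hyperparameter grid (2 kernels x 2 values of C)
param_grid = {'kernel': ['linear', 'rbf'], 'C': [1, 10]}

inner_cv = KFold(n_splits=4, shuffle=True, random_state=0)
outer_cv = KFold(n_splits=5, shuffle=True, random_state=0)

# Inner loop (steps 2-3): GridSearchCV picks the "winning" hyperparameters
# on each outer training split.
gscv = GridSearchCV(SVC(), param_grid, cv=inner_cv)

# Outer loop (steps 1 and 4): estimates the performance of the whole tuning
# procedure, not of any single hyperparameter set.
nested_scores = cross_val_score(gscv, X, y, cv=outer_cv)
print(nested_scores.mean())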

Question:
After 4., how do you determine which hyperparameters worked the best? I understand that you want to train your algorithm (e.g. Logistic Regression, Random Forest, etc.) with the COMPLETE data set at the end. But how do you determine which hyperparameters worked the best in your nested cross validation? My understanding is that for each inner loop, a different set of hyperparameters will win. And from the outer loop, you get an estimate of your GridSearchCV performance, but you do not get any one particular set of hyperparameters. So, in the final model creation, how do you know what hyperparameters to use? That's the missing logic I have trouble understanding from other threads.

Thank you in advance for any tips, especially if @Dikran Marsupial and @cbeleites can chime in!

Edit: If you can, please in your answer use terms like "algorithm" and "hyper parameters". I think one source of confusion for me is when people use the term "model" or "model selection". I get confused whether they are talking about selecting which algorithm to use or what hyper parameters to use.

Edit 2: I have created a notebook that shows two ways of doing nested cross validation. The first way is the one shown in the SKLearn example, and the other, longer way is one that I wrote. The way shown in SKLearn doesn't expose the "winning" hyperparameters, but my longer way does. The question remains the same, though: after I have completed the nested cross validation, even with the hyperparameters exposed, what do I do now? As you can see from the hyperparameters at the end of the notebook, they vary quite a bit.

Best Answer

(I'm sure I wrote most of this already in some answer - but can't find it right now. If anyone stumbles across that answer, please link it). I see 2 slightly different approaches here, which I think are both sensible.

But first some terminology:

  • Coming from an applied field, a (fitted/trained) model for me is something ready to use. I.e. the model contains all information needed to generate predictions for new data. Thus, the model also contains the hyperparameters. As you will see, this point of view is closely related to approach 2 below.
  • OTOH, training algorithm in my experience is not well defined in the following sense: in order to get the (fitted) model, not only does the - let's call it "primary fitting" - of the "normal" model parameters need to be done, but the hyperparameters also need to be fixed. From my application perspective, there isn't really much difference between parameters and hyperparameters: both are part of the model and need to be estimated/decided during training.
    I guess the difference between them is related to the perspective of someone developing new training algorithms, who'd usually describe a class of training algorithms together with some steering parameters (the hyperparameters) which are difficult/impossible to fix (or at least difficult to specify how they should be decided/estimated) without application/domain knowledge.

Approach 1: require stable optimization results

With this approach, "model training" is the fitting of the "normal" model parameters, and the hyperparameters are given. An inner loop, e.g. an inner cross validation, takes care of the hyperparameter optimization.

The crucial step/assumption here for solving the dilemma of which fold's hyperparameter set should be chosen is to require the optimization to be stable. Cross validation for validation purposes assumes that all surrogate models are sufficiently similar to the final model (obtained by the same training algorithm applied to the whole data set) to allow treating them as equal (among themselves as well as to the final model). If this assumption breaks down and

  1. the surrogate models are still equal (or equivalent) among themselves but not to the final model, we are talking about the well-known pessimistic bias of cross validation.

  2. If the surrogate models are also not equal/equivalent to each other, we have problems with instability.

For the optimization results of the inner loop this means that if the optimization is stable, there is no conflict in choosing the hyperparameters. And if considerable variation is observed across the inner cross validation results, the optimization is not stable. Unstable training situations have far worse problems than just the decision of which hyperparameter set to choose, and I'd really recommend stepping back in that case and starting the modeling process all over.

There's an exception here, though: there may be several local minima in the optimization yielding equal performance for practical purposes. Requiring the choice among them to also be stable may be an unnecessarily strong requirement - but I don't know how to get out of this dilemma.

Note that if not all models yield the same winning parameter set, you should not use outer loop estimates as generalization error here:

  • If you claim generalization error for parameters $p$, all surrogate models entering into the validation should actually use exactly these parameters.
    (Imagine someone told you they did a cross validation on a model with C = 1 and a linear kernel, and then you find out some splits were evaluated with an rbf kernel!)
  • But unless no decision is involved because all splits yielded the same parameters, this will break the independence in the outer loop: the test data of each split has already entered the decision of which parameter set wins, because it was training data in all the other splits and was thus used to optimize the parameters.
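
In practice, one way to check whether approach 1's stability requirement holds - i.e. whether all outer folds agree on a winning parameter set - is to run the outer loop by hand and record each inner winner. A minimal sketch (placeholder grid and estimator, not the OP's notebook code):

from collections import Counter
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
param_grid = {'kernel': ['linear', 'rbf'], 'C': [1, 10]}

outer_cv = KFold(n_splits=5, shuffle=True, random_state=0)
winners = []
for train_idx, test_idx in outer_cv.split(X, y):
    gscv = GridSearchCV(SVC(), param_grid, cv=4)
    gscv.fit(X[train_idx], y[train_idx])
    # record the hyperparameter set that "won" this inner optimization
    winners.append(tuple(sorted(gscv.best_params_.items())))

# A stable optimization shows (nearly) all outer folds agreeing:
print(Counter(winners))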

Approach 2: treat hyperparameter tuning as part of the model training

This approach bridges the perspectives of the "training algorithm developer" and the applied user of the training algorithm.

The training algorithm developer provides a "naked" training algorithm model = train_naked (trainingdata, hyperparameters). The applied user, however, needs tunedmodel = train_tuned (trainingdata), which also takes care of fixing the hyperparameters.

train_tuned can be implemented e.g. by wrapping a cross validation-based optimizer around the naked training algorithm train_naked.

train_tuned can then be used like any other training algorithm that does not require hyperparameter input, e.g. its output tunedmodel can be subjected to cross validation. Now the hyperparameters are checked for their stability just like the "normal" parameters should be checked for stability as part of the evaluation of the cross validation.
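
In scikit-learn terms, a GridSearchCV object (with refit enabled) plays essentially the role of such a train_tuned. A rough sketch under that reading, reusing the placeholder estimator and grid from above:

# Sketch of approach 2: hyperparameter tuning wrapped into the training itself.
# train_naked / train_tuned mirror the names used in the text; SVC and the grid
# are placeholder choices.
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def train_naked(X, y, hyperparameters):
    # "primary fitting" only - the hyperparameters are given from outside
    return SVC(**hyperparameters).fit(X, y)

def train_tuned(X, y):
    # fixing the hyperparameters is part of the training itself
    param_grid = {'kernel': ['linear', 'rbf'], 'C': [1, 10]}
    gscv = GridSearchCV(SVC(), param_grid, cv=4, refit=True)
    gscv.fit(X, y)
    return gscv.best_estimator_  # ready-to-use model, hyperparameters included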

This is actually what you do and evaluate in the nested cross validation if you average the performance of all winning models regardless of their individual parameter sets.


What's the difference?

We may end up with different final models when taking those 2 approaches:

  • the final model in approach 1 will be train_naked (all data, hyperparameters from optimization)
  • whereas approach 2 will use train_tuned (all data) and - as that runs the hyperparameter optimization again on the larger data set - this may end up with a different set of hyperparameters.
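
Continuing the sketch from above (same hypothetical helper functions and placeholder hyperparameters):

from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

# train_naked / train_tuned as defined in the sketch above

# approach 1: hyperparameters fixed once from the (stable) inner optimization
final_model_1 = train_naked(X, y, {'kernel': 'linear', 'C': 1})

# approach 2: rerun the tuned training on all the data; the hyperparameters it
# picks may differ from those seen in the surrogate models
final_model_2 = train_tuned(X, y)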

But again the same logic applies: if we find that the final model has substantially different parameters from the cross validation surrogate models, that's a symptom of assumption 1 being violated. So IMHO, again we do not have a conflict, but rather a check on whether our (implicit) assumptions are justified. And if they aren't, we should not bet too much on having a good estimate of the performance of that final model anyway.


I have the impression (also from seeing the number of similar questions/confusions here on CV) that many people think of nested cross validation as doing approach 1. But generalization error is usually estimated according to approach 2, so that's the way to go for the final model as well.


Iris example

Summary: The optimization is basically pointless. The available sample size does not allow distinguishing between the performances of any of the parameter sets here.

From the application point of view, however, the conclusion is that it doesn't matter which of the 4 parameter sets you choose - which isn't all that bad news: you found a comparatively stable plateau of parameters. Here comes the advantage of the proper nested validation of the tuned model: while you're not able to claim that it is the optimal model, you're still able to claim that the model built on the whole data using approach 2 will have about 97 % accuracy (95 % confidence interval for 145 correct out of 150 test cases: 92 - 99 %).
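
That interval is just a binomial (Clopper-Pearson) confidence interval and can be recomputed e.g. with statsmodels (a sketch; any exact binomial CI routine would do):

# 95 % Clopper-Pearson confidence interval for 145 correct out of 150 test cases
from statsmodels.stats.proportion import proportion_confint

lower, upper = proportion_confint(count=145, nobs=150, alpha=0.05, method='beta')
print(lower, upper)  # roughly 0.92 and 0.99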

Note that approach 1 also isn't as far off as it seems - see below: your optimization accidentally missed a comparatively clear "winner" because of ties (that's actually another very telltale symptom of the sample size problem).

While I'm not deep enough into SVMs to "see" that C = 1 should be a good choice here, I'd go with the more restrictive linear kernel. Also, as you did the optimization, there's nothing wrong with choosing the winning parameter set even if you are aware that all parameter sets lead to practically equal performance.

In the future, however, consider whether your experience yields rough guesstimates of what performance you can expect and roughly what model would be a good choice. Then build that model (with manually fixed hyperparameters) and calculate a confidence interval for its performance. Use this to decide whether trying to optimize is sensible at all. (I may add that I'm mostly working with data where getting 10 more independent cases is not easy - if you are in a field with large independent sample sizes, things look much better for you.)

Long version:

As for the example results on the iris data set: iris has 150 cases, and an SVM with a grid of 2 x 2 parameters (2 kernels, 2 orders of magnitude for the penalty C) is considered.

The inner loop has splits of 129 (2x) and 132 (6x) cases. The "best" parameter set is undecided between the linear and the rbf kernel, both with C = 1. However, the inner test accuracies are all (including those of the always losing C = 10) within 94 - 98.5 % observed accuracy. The largest difference we have in one of the splits is 3 vs. 8 errors for rbf with C = 1 vs. 10.

There's no way this is a significant difference. I don't know how to extract the predictions for the individual cases in the CV, but even assuming that the 3 errors were shared and the C = 10 model made an additional 5 errors:

> table (rbf1, rbf10)
         rbf10
rbf1      correct wrong
  correct     124     5
  wrong         0     3

> mcnemar.exact(rbf1, rbf10)

    Exact McNemar test (with central confidence intervals)

data:  rbf1 and rbf10
b = 5, c = 0, p-value = 0.0625
alternative hypothesis: true odds ratio is not equal to 1
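
For a Python-only workflow, the same exact test can be reproduced with statsmodels on the assumed 2x2 table above (a sketch):

# Exact McNemar test on the assumed 2x2 table (rbf C=1 vs C=10, correct/wrong)
from statsmodels.stats.contingency_tables import mcnemar

table = [[124, 5],
         [0, 3]]
print(mcnemar(table, exact=True))  # p-value = 0.0625, matching the R output above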

Remember that there are 6 pairwise comparisons in the 2 x 2 grid, so we'd need to correct for multiple comparisons as well.


Approach 1

In 3 of the 4 outer splits where rbf "won" over the linear kernel, they actually had the same estimated accuracy (I guess min in case of ties returns the first suitable index).

Changing the grid to params = {'kernel':['linear', 'rbf'],'C':[1,10]} yields

({'kernel': 'linear', 'C': 1}, 0.95238095238095233, 0.97674418604651159)
({'kernel': 'rbf', 'C': 1}, 0.95238095238095233, 0.98449612403100772)
({'kernel': 'linear', 'C': 1}, 1.0, 0.97727272727272729)
({'kernel': 'linear', 'C': 1}, 0.94444444444444442, 0.98484848484848486)
({'kernel': 'linear', 'C': 1}, 0.94444444444444442, 0.98484848484848486)
({'kernel': 'linear', 'C': 1}, 1.0, 0.98484848484848486)
({'kernel': 'linear', 'C': 1}, 1.0, 0.96212121212121215)

Approach 2:

Here, clf is your final model. With random_state = 2, rbf with C = 1 wins:

In [310]: clf.grid_scores_
[...snip warning...]
Out[310]: 
[mean: 0.97333, std: 0.00897, params: {'kernel': 'linear', 'C': 1},
 mean: 0.98000, std: 0.02773, params: {'kernel': 'rbf', 'C': 1},
 mean: 0.96000, std: 0.03202, params: {'kernel': 'linear', 'C': 10},
 mean: 0.95333, std: 0.01791, params: {'kernel': 'rbf', 'C': 10}]

(This happens about 1 in 5 times; 1 in 6 times linear and rbf with C = 1 are tied for rank 1.)
