The key thing to remember is that, for cross-validation to give an (almost) unbiased performance estimate, every step involved in fitting the model must also be performed independently in each fold of the cross-validation procedure. The best thing to do is to view feature selection, meta/hyper-parameter setting, and optimising the parameters as integral parts of model fitting, and never do any one of these steps without doing the other two.
The optimistic bias that can be introduced by departing from that recipe can be surprisingly large, as demonstrated by Cawley and Talbot, where the bias introduced by an apparently benign departure was larger than the difference in performance between competing classifiers. Worse still, biased protocols favour bad models most strongly, as bad models are more sensitive to the tuning of hyper-parameters and hence more prone to over-fitting the model selection criterion!
Answers to specific questions:
The procedure in step 1 is valid because feature selection is performed separately in each fold, so what you are cross-validating is the whole procedure used to fit the final model. The cross-validation estimate will have a slight pessimistic bias, as the dataset for each fold is slightly smaller than the whole dataset used for the final model.
For 2, since cross-validation is used to select the model parameters, you need to repeat that selection procedure independently in each fold of the cross-validation used for performance estimation, so you end up with nested cross-validation.
For 3, essentially, yes, you need to do nested-nested cross-validation. Essentially you need to repeat, in each fold of the outermost cross-validation (used for performance estimation), everything you intend to do to fit the final model.
For 4 - yes, if you have a separate hold-out set, then that will give an unbiased estimate of performance without needing an additional cross-validation.
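The principle behind these answers can be sketched with scikit-learn, where wrapping the tuning step inside the estimator guarantees it is repeated independently in each outer fold. The estimator, parameter grid, and synthetic data below are placeholders for illustration:

```python
# Nested cross-validation: the inner CV (GridSearchCV) is part of model
# fitting; the outer CV estimates the performance of that whole procedure.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)

inner_cv = KFold(n_splits=5, shuffle=True, random_state=1)
outer_cv = KFold(n_splits=5, shuffle=True, random_state=2)

# Hyper-parameter tuning is wrapped inside the estimator, so it is
# repeated independently in every outer fold.
tuned = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=inner_cv)

# Each outer score tests a model whose hyper-parameters were chosen
# without ever seeing that outer test fold.
outer_scores = cross_val_score(tuned, X, y, cv=outer_cv)
print(outer_scores.mean())  # estimate for the whole fitting procedure
```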
> How do I choose a model from this [outer cross validation] output?
Short answer: You don't.
Treat the inner cross validation as part of the model fitting procedure. That means that the fitting, including the fitting of the hyper-parameters (this is where the inner cross validation hides), is just like any other model estimation routine.
The outer cross validation estimates the performance of this model fitting approach. For that you use the usual assumptions:

- the $k$ outer surrogate models are equivalent to the "real" model built by `model.fitting.procedure` with all data.
- Or, in case the first assumption breaks down (pessimistic bias of resampling validation), at least the $k$ outer surrogate models are equivalent to each other.
This allows you to pool (average) the test results. It also means that you do not need to choose among them, as you assume that they are basically the same.
The breaking down of this second, weaker assumption is model instability.
Do not pick the seemingly best of the $k$ surrogate models - that would usually be just "harvesting" testing uncertainty and leads to an optimistic bias.
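This "harvesting" effect can be made concrete with a small simulation (the numbers are illustrative): even when all $k$ surrogate models have exactly the same true performance, the maximum of their noisy fold estimates exceeds the truth on average, while the pooled mean does not.

```python
# If k models share the SAME true accuracy, picking the best fold estimate
# is still optimistically biased; averaging them is not.
import numpy as np

rng = np.random.default_rng(0)
true_accuracy, n_test, k = 0.80, 30, 5

# Each fold estimate is a proportion out of n_test Bernoulli(0.8) trials;
# simulate many cross-validation runs at once.
fold_estimates = rng.binomial(n_test, true_accuracy, size=(10_000, k)) / n_test

print(fold_estimates.mean())              # pooled estimate: close to 0.80
print(fold_estimates.max(axis=1).mean())  # "best" fold: clearly above 0.80
```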
> So how can I use nested CV for model selection?
The inner CV does the selection.
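In scikit-learn terms, that means the final model is obtained by running the inner selection once on all data; no choice among the outer surrogate models is involved. Estimator, grid, and data are again placeholders:

```python
# The inner CV alone performs model selection: run it once on ALL data
# to obtain the final model. The outer CV is only for performance estimation.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)

search = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=5).fit(X, y)

final_model = search.best_estimator_  # "the" model, refit on all data
print(search.best_params_)            # hyper-parameters chosen by the inner CV
```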
> It looks to me that selecting the best model out of those K winning models would not be a fair comparison since each model was trained and tested on different parts of the dataset.
You are right that it is not a good idea to pick one of the $k$ surrogate models. But you are wrong about the reason; the real reason is given above. The fact that they are not trained and tested on the same data does not "hurt" here.
- Not having the same testing data: as you want to claim afterwards that the test results generalize to never-before-seen data, this cannot make a difference.
- Not having the same training data:
- if the models are stable, this doesn't make a difference: Stable here means that the model does not change (much) if the training data is "perturbed" by replacing a few cases by other cases.
- if the models are not stable, three considerations are important:
- you can actually measure whether and to what extent this is the case, by using iterated/repeated $k$-fold cross validation. That allows you to compare cross validation results for the same case that were predicted by different models built on slightly differing training data.
- If the models are not stable, the variance observed over the test results of the $k$-fold cross validation increases: you do not only have the variance due to the fact that only a finite number of cases is tested in total, but have additional variance due to the instability of the models (variance in the predictive abilities).
- If instability is a real problem, you cannot extrapolate well to the performance for the "real" model.
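Such a stability check via iterated/repeated cross-validation can be sketched as follows (model and data are placeholder choices): collect the prediction each case receives in every repetition and see how often the repetitions disagree.

```python
# Measure model (in)stability: with repeated k-fold CV, every case is
# predicted once per repetition, each time by a model trained on slightly
# different data. Stable models give the same prediction every time.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedKFold

X, y = make_classification(n_samples=150, random_state=0)
n_splits, n_repeats = 5, 20
cv = RepeatedKFold(n_splits=n_splits, n_repeats=n_repeats, random_state=0)

# predictions[i, r] = prediction for case i in repetition r
predictions = np.empty((len(y), n_repeats), dtype=int)
for split, (train, test) in enumerate(cv.split(X)):
    rep = split // n_splits  # RepeatedKFold yields all folds of a repeat in turn
    model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    predictions[test, rep] = model.predict(X[test])

# Fraction of cases whose prediction ever changes across repetitions:
unstable = np.mean(predictions.min(axis=1) != predictions.max(axis=1))
print(unstable)  # 0 for perfectly stable models
```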
Which brings me to your last question:
> What types of analysis/checks can I do with the scores that I get from the outer K folds?
- check for stability of the predictions (use iterated/repeated cross-validation)
- check for the stability/variation of the optimized hyper-parameters.
  For one thing, wildly scattering hyper-parameters may indicate that the inner optimization didn't work. For another, this may allow you to decide on the hyper-parameters without the costly optimization step in similar situations in the future. By costly I do not refer to computational resources but to the fact that this "costs" information that may better be used for estimating the "normal" model parameters.
- check for the difference between the inner and outer estimate of the chosen model. If there is a large difference (the inner being very over-optimistic), there is a risk that the inner optimization didn't work well because of overfitting.
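Both diagnostics fall out of an explicit outer loop (estimator, grid, and data below are placeholders): record the tuned hyper-parameters and the inner selection score per fold, alongside the independent outer test score.

```python
# Diagnostics from the outer folds: scatter of the tuned hyper-parameters,
# and the gap between inner (selection) and outer (test) estimates.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)
chosen_C, inner_scores, outer_scores = [], [], []

for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    search = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=5).fit(X[train], y[train])
    chosen_C.append(search.best_params_["C"])            # stable across folds?
    inner_scores.append(search.best_score_)              # selection estimate
    outer_scores.append(search.score(X[test], y[test]))  # independent test

print(chosen_C)  # wildly scattering values suggest the optimization failed
# A large positive gap suggests the inner optimization over-fitted
# the model selection criterion:
print(sum(inner_scores) / 5 - sum(outer_scores) / 5)
```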
Update on @user99889's question: what to do if the outer CV finds instability?
First of all, detecting in the outer CV loop that the models do not yield stable predictions doesn't, in that respect, really differ from detecting that the prediction error is too high for the application. It is one of the possible outcomes of model validation (or verification), implying that the model we have is not fit for its purpose.
In the comment answering @davips, I was thinking of tackling the instability in the inner CV - i.e. as part of the model optimization process.
But you are certainly right: if we change our model based on the findings of the outer CV, yet another round of independent testing of the changed model is necessary.
However, instability in the outer CV would also be a sign that the optimization wasn't set up well - so finding instability in the outer CV implies that the inner CV did not penalize instability in the necessary fashion - this would be my main point of critique in such a situation. In other words, why does the optimization allow/lead to heavily overfit models?
However, there is one peculiarity here that IMHO may excuse the further change of the "final" model after careful consideration of the exact circumstances: As we did detect overfitting, any proposed change (fewer d.f./more restrictive or aggregation) to the model would be in direction of less overfitting (or at least hyperparameters that are less prone to overfitting). The point of independent testing is to detect overfitting - underfitting can be detected by data that was already used in the training process.
So if we are talking, say, about further reducing the number of latent variables in a PLS model, that would be comparably benign (if the proposed change were a totally different type of model, say PLS instead of SVM, all bets would be off), and I'd be even more relaxed about it if I knew that we are anyway at an intermediate stage of modeling - after all, if the optimized models are still unstable, there's no question that more cases are needed. Also, in many situations, you'll eventually need to perform studies that are designed to properly test various aspects of performance (e.g. generalization to data acquired in the future).
Still, I'd insist that the full modeling process would need to be reported, and that the implications of these late changes would need to be carefully discussed.
Also, aggregation including an out-of-bag analogue CV estimate of performance would be possible from the already available results - which is the other type of "post-processing" of the model that I'd be willing to consider benign here. Yet again, it would then have been better if the study had been designed from the beginning to check that aggregation provides no advantage over individual predictions (which is another way of saying that the individual models are stable).
Update (2019): the more I think about these situations, the more I come to favor the "nested cross validation apparently without nesting" approach.
Best Answer
Let me add a few points to the nice answers that are already here:
Nested K-fold vs repeated K-fold: nested and repeated k-fold are totally different things, used for different purposes.
I therefore recommend repeating any nested k-fold cross validation.
> Better report "The statistics of our estimator, e.g. its confidence interval, variance, mean, etc. on the full sample (in this case the CV sample)."
Sure. However, you need to be aware of the fact that you will not (easily) be able to estimate the confidence interval by the cross validation results alone. The reason is that, however much you resample, the actual number of cases you look at is finite (and usually rather small - otherwise you'd not bother about these distinctions).
See e.g. Bengio, Y. and Grandvalet, Y.: No Unbiased Estimator of the Variance of K-Fold Cross-Validation, Journal of Machine Learning Research, 2004, 5, 1089-1105.
However, in some situations you can nevertheless estimate the variance: with repeated k-fold cross validation, you can get an idea whether model instability plays a role. And this instability-related variance is actually the part of the variance that you can reduce by repeated cross-validation. (If your models are perfectly stable, each repetition/iteration of the cross validation will have exactly the same predictions for each case; however, you still have variance due to the actual choice/composition of your data set.) So there is a lower limit to the variance of repeated k-fold cross validation: doing more and more repetitions/iterations does not make sense, as the variance caused by the fact that in the end only $n$ real cases were tested is not affected.
The variance caused by the fact that in the end only $n$ real cases were tested can be estimated for some special cases, e.g. the performance of classifiers as measured by proportions such as hit rate, error rate, sensitivity, specificity, predictive values and so on: they follow binomial distributions. Unfortunately, this means that they have huge variance $\sigma^2 (\hat p) = \frac{1}{n} p (1 - p)$, with $p$ the true performance value of the model, $\hat p$ the observed one, and $n$ the sample size in the denominator of the fraction. This has its maximum at $p = 0.5$. You can also calculate confidence intervals starting from the observation. (@Frank Harrell will comment that these are no proper scoring rules, so you shouldn't use them anyway - which is related to the huge variance.) However, IMHO they are useful for deriving conservative bounds (there are better scoring rules, and the bad behaviour of these fractions is a worst-case limit for the better rules),
see e.g. C. Beleites, R. Salzer and V. Sergo: Validation of Soft Classification Models using Partial Class Memberships: An Extended Concept of Sensitivity & Co. applied to Grading of Astrocytoma Tissues, Chemom. Intell. Lab. Syst., 122 (2013), 12 - 22.
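The binomial variance formula above, together with a simple normal-approximation confidence interval, takes only a few lines (the numbers are illustrative, not from any particular study):

```python
# Variance of a tested proportion (hit rate, error rate, ...):
# sigma^2 = p(1-p)/n, plus a normal-approximation 95% confidence interval.
import math

n = 100       # number of independently tested cases
p_hat = 0.85  # observed hit rate

sigma = math.sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - 1.96 * sigma, p_hat + 1.96 * sigma)

print(sigma)  # about 0.036: huge compared to typical differences between models
print(ci)
```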
So this lets me turn around your argumentation against the hold-out:
Not necessarily (if compared to k-fold) - but you have to trade off: a small hold-out set (e.g. $\frac{1}{k}$ of the sample) => low bias (≈ same as k-fold CV), but high variance (> k-fold CV, roughly by a factor of $k$).
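That factor-of-$k$ claim follows directly from the binomial variance above: k-fold CV ends up testing all $n$ cases once, while a hold-out of $\frac{1}{k}$ of the sample tests only $n/k$ of them. Illustrative numbers:

```python
# Variance trade-off between a 1/k hold-out and k-fold CV,
# under the binomial approximation sigma^2 = p(1-p)/n_tested.
n, k, p = 500, 10, 0.8

var_kfold = p * (1 - p) / n           # k-fold CV: all n cases tested once
var_holdout = p * (1 - p) / (n // k)  # hold-out: only n/k cases tested

print(var_holdout / var_kfold)  # roughly k
```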
Usually, yes. However, it is also good to keep in mind that there are important types of errors (such as drift) that cannot be measured/detected by resampling validation.
See e.g. Esbensen, K. H. and Geladi, P.: Principles of Proper Validation: use and abuse of re-sampling for validation, Journal of Chemometrics, 2010, 24, 168-187.
I'd say no to this: it doesn't matter how the model training uses its $\frac{k - 1}{k} n$ training samples, as long as the surrogate models and the "real" model use them in the same way. (I look at the inner cross-validation / estimation of hyper-parameters as part of the model set-up).
Things look different if you compare surrogate models which are trained including hyper-parameter optimization to "the" model which is trained on fixed hyper-parameters. But IMHO that is generalizing from $k$ apples to 1 orange.
Whether this makes a difference depends on the instability of the (surrogate) models, see above. For stable models it is irrelevant. The same may hold for whether you do 100 or 1000 outer repetitions/iterations.
And this paper definitely belongs on the reading list for this topic: Cawley, G. C. and Talbot, N. L. C.: On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation, Journal of Machine Learning Research, 2010, 11, 2079-2107.