My paper in JMLR addresses this exact question, and demonstrates why the procedure suggested in the question (or at least one very like it) results in optimistically biased performance estimates:
Gavin C. Cawley, Nicola L. C. Talbot, "On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation", Journal of Machine Learning Research, 11(Jul):2079-2107, 2010.
The key thing to remember is that cross-validation is a technique for estimating the generalisation performance for a method of generating a model, rather than of the model itself. So if choosing kernel parameters is part of the process of generating the model, you need to cross-validate the model selection process as well, otherwise you will end up with an optimistically biased performance estimate (as will happen with the procedure you propose).
Assume you have a function fit_model, which takes in a dataset consisting of attributes X and desired responses Y, and which returns the fitted model for that dataset, including the tuning of hyper-parameters (in this case kernel and regularisation parameters). This tuning of hyper-parameters can be performed in many ways, for example minimising the cross-validation error over X and Y.
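As an illustration, here is a minimal sketch of such a fit_model in scikit-learn terms (the RBF-SVM, the grid values, and the 5-fold inner split are arbitrary assumptions, not part of the original description), tuning kernel and regularisation parameters by minimising cross-validation error:

```python
# Hypothetical sketch of fit_model: tuning the hyper-parameters (C, gamma)
# is part of the model-fitting procedure, here done by an inner grid search.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def fit_model(X, Y):
    grid = {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]}
    search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)
    search.fit(X, Y)
    return search.best_estimator_   # fitted model incl. tuned hyper-parameters

# usage
X, Y = load_iris(return_X_y=True)
model = fit_model(X, Y)
print(model.C, model.gamma)
```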
Step 1 - Fit the model to all available data, using the function fit_model. This gives you the model that you will use in operation or deployment.
Step 2 - Performance evaluation. Perform repeated cross-validation using all available data. In each fold, the data are partitioned into a training set and a test set. Fit the model using the training set (record hyper-parameter values for the fitted model) and evaluate performance on the test set. Use the mean over all of the test sets as a performance estimate (and perhaps look at the spread of values as well).
Step 3 - Variability of hyper-parameter settings - perform an analysis of the hyper-parameter values collected in step 2. However, I should point out that there is nothing special about hyper-parameters; they are just parameters of the model that have been estimated (indirectly) from the data. They are treated as hyper-parameters rather than parameters for computational/mathematical convenience, but this doesn't have to be the case.
The problem with using cross-validation here is that the training and test data are not independent samples (as they share data) which means that the estimate of the variance of the performance estimate and of the hyper-parameters is likely to be biased (i.e. smaller than it would be for genuinely independent samples of data in each fold). Rather than repeated cross-validation, I would probably use bootstrapping instead and bag the resulting models if this was computationally feasible.
The key point is that to get an unbiased performance estimate, whatever procedure you use to generate the final model (fit_model) must be repeated in its entirety independently in each fold of the cross-validation procedure.
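In scikit-learn terms, "repeated in its entirety in each fold" can be sketched as nested cross-validation (the classifier and grid values below are illustrative assumptions):

```python
# Sketch of nested cross-validation: the whole model-selection procedure
# (the inner grid search) is repeated independently inside each outer fold.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, Y = load_iris(return_X_y=True)
grid = {"C": [1, 10, 100], "gamma": [0.001, 0.01, 0.1]}

# inner loop: hyper-parameter tuning; outer loop: performance estimation
inner = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)
outer_scores = cross_val_score(inner, X, Y, cv=5)
print(outer_scores.mean())   # approximately unbiased performance estimate
```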
I'm not sure what you want to do in step 7a. As I understand it right now, it doesn't make sense to me.
Here's how I understand your description: in step 7, you want to compare the hold-out performance with the results of a cross validation embracing steps 4 - 6. (so yes, that would be a nested setup).
The main points why I don't think this comparison makes much sense are:
This comparison cannot detect two of the main sources of overoptimistic validation results I encounter in practice:
Data leaks (dependence) between training and test data caused by a hierarchical (aka clustered) data structure that is not accounted for in the splitting. In my field, we typically have multiple (sometimes thousands of) readings (= rows in the data matrix) of the same patient or biological replicate of an experiment. These are not independent, so the validation splitting needs to be done at the patient level. However, if such a data leak occurs, you'll have it both in the splitting for the hold-out set and in the cross-validation splitting. Hold-out would then be just as optimistically biased as cross validation.
Preprocessing of the data done on the whole data matrix, where the calculations are not independent for each row but many/all rows are used to calculate parameters of the preprocessing. A typical example would be a PCA projection before the "actual" classification.
Again, that would affect both your hold-out and the outer cross validation, so you cannot detect it.
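As an aside on that second point: if the analysis is done in scikit-learn, one common safeguard is to wrap the preprocessing into a Pipeline, so the PCA parameters are re-estimated from the training portion of each split only (a minimal sketch; it fixes the preprocessing leak inside a validation, but not a clustered data structure):

```python
# Sketch: putting PCA inside a Pipeline ensures its projection is computed
# from the training part of each fold only, avoiding this kind of leak.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, Y = load_iris(return_X_y=True)
pipe = make_pipeline(PCA(n_components=2), SVC(kernel="rbf"))
scores = cross_val_score(pipe, X, Y, cv=5)   # PCA refit inside every fold
```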
For the data I work with, both errors can easily cause the fraction of misclassifications to be underestimated by an order of magnitude!
If you are restricted to this counted-fraction-of-test-cases type of performance measure, model comparisons need either extremely large numbers of test cases or ridiculously large differences in true performance. Comparing 2 classifiers with unlimited training data may be a good start for further reading.
However, comparing the model quality that the inner cross validation claims for the "optimal" model with the outer cross validation or hold-out validation does make sense: if the discrepancy is high, it is questionable whether your grid-search optimization worked (you may have skimmed variance due to the high variance of the performance measure). This comparison is easier in that you can spot trouble when the inner estimate is ridiculously good compared to the outer one; if it isn't, you don't need to worry that much about your optimization. But in any case, if your outer (step 7) measurement of the performance is honest and sound, you at least have a useful estimate of the obtained model, whether it is optimal or not.
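That comparison can be sketched in scikit-learn terms (classifier and grid values are illustrative assumptions; the inner score is the one subject to optimistic bias):

```python
# Sketch: compare what the inner (optimization) CV claims for the "optimal"
# model with an outer, nested CV estimate; a large gap signals skimmed variance.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, Y = load_iris(return_X_y=True)
grid = {"C": [1, 10, 100], "gamma": [0.001, 0.01, 0.1]}

inner = GridSearchCV(SVC(kernel="rbf"), grid, cv=5).fit(X, Y)
inner_claim = inner.best_score_              # optimistically biased estimate

outer_estimate = cross_val_score(
    GridSearchCV(SVC(kernel="rbf"), grid, cv=5), X, Y, cv=5).mean()

gap = inner_claim - outer_estimate           # large positive gap = warning sign
```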
IMHO, measuring the learning curve is yet a different problem. I'd probably deal with that separately, and I think you need to define more clearly what you need the learning curve for (do you need the learning curve for a data set of the given problem, data, and classification method, or the learning curve for this data set of the given problem, data, and classification method?), and a bunch of further decisions (e.g. how to deal with the model complexity as a function of the training sample size? Optimize all over again, use fixed hyperparameters, or decide on a function to fix hyperparameters depending on training set size?)
(My data usually has too few independent cases to get the measurement of the learning curve sufficiently precise to use it in practice - but you may be better off if your 1200 rows are actually independent.)
update: What is "wrong" with the scikit-learn example?
First of all, nothing is wrong with nested cross validation here. Nested validation is of utmost importance for data-driven optimization, and cross validation is a very powerful approach (particularly if iterated/repeated).
Then, whether anything is wrong at all depends on your point of view: as long as you do an honest nested validation (keeping the outer test data strictly independent), the outer validation is a proper measure of the "optimal" model's performance. Nothing wrong with that.
But several things can and do go wrong with grid search of these proportion-type performance measures for hyperparameter tuning of SVM. Basically, they mean that you may not (probably cannot?) rely on the optimization. Nevertheless, as long as your outer split was done properly, even if the model is not the best possible, you have an honest estimate of the performance of the model you got.
I'll try to give intuitive explanations why the optimization may be in trouble:
Mathematically/statistically speaking, the problem with the proportions is that measured proportions $\hat p$ are subject to a huge variance due to the finite test sample size $n$ (depending also on the true performance $p$ of the model):
$Var (\hat p) = \frac{p (1 - p)}{n}$
You need ridiculously huge numbers of cases (at least compared to the numbers of cases I can usually have) in order to achieve the needed precision (bias/variance sense) for estimating recall, precision (machine learning performance sense). This of course applies also to ratios you calculate from such proportions. Have a look at the confidence intervals for binomial proportions. They are shockingly large! Often larger than the true improvement in performance over the hyperparameter grid.
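For a feel of the magnitude, a back-of-the-envelope calculation with the formula above, using $p = 0.97$ and the $n = 1797$ of the digits example later in this answer:

```python
# Back-of-the-envelope: variance of a measured proportion with p = 0.97
# and n = 1797 test cases (as in the digits example later in this answer).
import math

p, n = 0.97, 1797
se = math.sqrt(p * (1 - p) / n)     # standard error of p_hat
ci_halfwidth = 1.96 * se            # normal-approximation 95 % interval
print(round(se, 4), round(ci_halfwidth, 4))
# half-width ~ 0.008: accuracy differences below ~1 % are within the noise
```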
And statistically speaking, grid search is a massive multiple comparison problem: the more points of the grid you evaluate, the higher the risk of finding some combination of hyperparameters that accidentally looks very good for the train/test split you are evaluating. This is what I mean by skimming variance. The well known optimistic bias of the inner (optimization) validation is just a symptom of this variance skimming.
Intuitively, consider a hypothetical change of a hyperparameter, that slowly causes the model to deteriorate: one test case moves towards the decision boundary. The 'hard' proportion performance measures do not detect this until the case crosses the border and is on the wrong side. Then, however, they immediately assign a full error for an infinitely small change in the hyperparameter.
In order to do numerical optimization, you need the performance measure to be well behaved. Proportion-type performance measures are not: they are jumpy (not continuously differentiable) at the points where a case crosses the decision boundary, and between those jumps they do not detect the changes that actually occur. Neither property is suitable for optimization.
Proper scoring rules are defined in a way that is particularly suitable for optimization. They have their global maximum when the predicted probabilities match the true probabilities for each case to belong to the class in question.
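A toy numeric illustration (the probabilities are made up): the Brier score, a strictly proper scoring rule, registers a case drifting towards the decision boundary, while the fraction of misclassifications stays flat until the case crosses it:

```python
# Illustration: accuracy is blind to a case drifting toward the boundary,
# while the Brier score (a strictly proper scoring rule) degrades smoothly.
import numpy as np

y_true = np.array([1, 1, 1, 0, 0])

# predicted probabilities before/after one case drifts from 0.9 toward 0.55
p_before = np.array([0.9, 0.9, 0.90, 0.1, 0.1])
p_after  = np.array([0.9, 0.9, 0.55, 0.1, 0.1])

def accuracy(y, p):
    return np.mean((p > 0.5) == y)    # hard, proportion-type measure

def brier(y, p):
    return np.mean((p - y) ** 2)      # strictly proper scoring rule

print(accuracy(y_true, p_before), accuracy(y_true, p_after))  # unchanged
print(brier(y_true, p_before), brier(y_true, p_after))        # worsens
```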
For SVMs you have the additional problem that not only the performance measures but also the model itself reacts in this jumpy fashion: small changes of the hyperparameters will not change anything. The model changes only when the hyperparameters change enough to cause some case to either stop being a support vector or to become one. Again, such models are hard to optimize.
Literature:
- Brown, L.; Cai, T. & DasGupta, A.: Interval Estimation for a Binomial Proportion, Statistical Science, 16, 101-133 (2001).
- Cawley, G. C. & Talbot, N. L. C.: On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation, Journal of Machine Learning Research, 11, 2079-2107 (2010).
- Gneiting, T. & Raftery, A. E.: Strictly Proper Scoring Rules, Prediction, and Estimation, Journal of the American Statistical Association, 102, 359-378 (2007). DOI: 10.1198/016214506000001437
- Brereton, R.: Chemometrics for Pattern Recognition, Wiley (2009). Points out the jumpy behaviour of the SVM as a function of the hyperparameters.
Update II: Skimming variance
What you can afford in terms of model comparison obviously depends on the number of independent cases. Let's make a quick and dirty simulation of the risk of skimming variance here:
The scikit-learn documentation says that there are 1797 rows in the digits data set.

- Assume that 100 models are compared, e.g. a $10 \times 10$ grid for 2 parameters.
- Assume that both parameters (ranges) do not affect the models at all, i.e. all models have the same true performance of, say, 97 % (a typical performance for the digits data set).

Run $10^4$ simulations of "testing these models" with sample size = 1797 rows as in the digits data set:
p.true = 0.97   # hypothetical true performance for all models
n.models = 100  # 10 x 10 grid
n.rows = 1797   # rows in the scikit-learn digits data

sim.test <- replicate (expr = rbinom (n = n.models, size = n.rows, prob = p.true),
                       n = 1e4)
sim.test <- apply (sim.test, 2, max) # take seemingly best model of each run

hist (sim.test / n.rows,
      breaks = (round (p.true * n.rows) : n.rows) / n.rows + 1 / 2 / n.rows,
      col = "black", main = 'Distribution max. observed performance',
      xlab = "max. observed performance", ylab = "n runs")
abline (v = p.true, col = "red") # true performance of all models
Here's the distribution for the best observed performance:
The red line marks the true performance of all our hypothetical models. On average, we observe only 2/3 of the true error rate for the seemingly best of the 100 compared models (for the simulation we know that they all perform equally with 97% correct predictions).
This simulation is obviously very much simplified:
- In addition to the test sample size variance there is at least the variance due to model instability, so we're underestimating the variance here
- Tuning parameters affecting the model complexity will typically cover parameter sets where the models are unstable and thus have high variance.
- For the UCI digits from the example, the original database has ca. 11000 digits written by 44 persons. What if the data is clustered according to the person who wrote? (I.e. is it easier to recognize an 8 written by someone if you know how that person writes, say, a 3?) The effective sample size then may be as low as 44.
- Tuning model hyperparameters may lead to correlation between the models (in fact, that would be considered well behaved from a numerical optimization perspective). It is difficult to predict the influence of that (and I suspect this is impossible without taking into account the actual type of classifier).
In general, however, both a low number of independent test cases and a high number of compared models increase the bias. Also, the Cawley and Talbot paper reports empirically observed behaviour.
Best Answer
The benefit of determining the contents of each fold at the beginning, rather than re-sampling, is that you avoid the bias that could arise from selecting a single observation for more than one training or testing set. This is only a benefit if you care that no records are over-represented in your validation.
If the argument is that new folds should be generated without replacement after each fold has been tested, it can be shown that the likelihood of a single observation to appear in each fold is unaffected and they are therefore equivalent approaches.
Stratified methods are most commonly used in cases where classes are unbalanced - to avoid, for example, folds where there are no (or very few, or just inefficiently few) positive examples in a sparse binary classification problem. This applies both to StratifiedShuffleSplit and StratifiedKFold methods.
In order to determine which to use between K-Fold and ShuffleSplit, though, we'll have to understand some key differences between the methods.
In K-fold, the model is trained at each iteration on a proportion of the data set equal to $\frac{k-1}{k}$, i.e. for $k=5$ the model is trained on 80% of the data at each iteration and for $k=10$ on 90%. There are $k$ training iterations in the algorithm.
In ShuffleSplit, the model is trained at each iteration on a defined `train_size`. The default size of the training set in the scikit-learn implementation is 90%, and the number of iterations is parameterized (`n_splits`). You can therefore configure two validation strategies that have the same number of training runs and are trained on the same proportion of the data, i.e. K-fold validation with $k=10$ and ShuffleSplit with `train_size = .9` and `n_splits = 10`. No difference here.
K-Fold is guaranteed to both train and test on every observation of the dataset an equal number of times ($k-1$ and 1 times, respectively).
ShuffleSplit does not guarantee this proportion: because it re-samples at each iteration, some observations can appear in the test set twice or several times, while other observations may never be represented in a test set.
This is a point for using K-Fold.
Some other discussion has brought up training time as a point in favor of ShuffleSplit - that is, you can configure ShuffleSplit to run with less demanding coverage (e.g. `train_size = .8`, `n_splits = 5`). This is only a strong argument for configurations that can't be approximated by adjusting the parameter $k$ - note that $k=5$ already matches the ShuffleSplit parameters just mentioned. For example, K-fold can't approximate `train_size = .8`, `n_splits = 3`. The tradeoff is that each configuration of ShuffleSplit that would be more computationally efficient than K-fold also provides less complete validation coverage: in this example (`train_size = .8`, `n_splits = 3`), the model could never be tested on more than 60% of the data, and would almost always be tested on less than that.

TL;DR:
K-Fold is generally better except in edge cases where the computational demand of K-Fold isn't supported by a need for comprehensive cross-validation coverage.