Cross Validation – Variance Estimates in K-Fold Cross-Validation

cross-validation, machine-learning

K-fold cross-validation can be used to estimate the generalization capability of a given classifier. Can I (or should I) also compute a pooled variance from all validation runs in order to obtain a better estimate of its variance?

If not, why?

I have found papers which do use the pooled standard deviation across cross-validation runs. I have also found papers explicitly stating there is no universal estimator for the validation variance. However, I have also found papers showing some variance estimators for the generalization error (I am still reading and trying to comprehend this one). What do people really do (or report) in practice?

EDIT: When CV is used to measure the crude classification error (i.e. either a sample has been labeled correctly or it hasn't; e.g. true or false), it may not make sense to talk about a pooled variance. However, I am talking about the case in which the statistic we are estimating does have a defined variance. So, for a given fold, we can end up with both a value for the statistic and a variance estimate. It does not seem right to discard this information and consider only the average statistic. And while I am aware I could build a variance estimate using bootstrap methods, (if I am not very wrong) doing so would still ignore the fold variances and take only the statistic estimates into consideration (plus requiring much more computational power).
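To make the kind of pooling I have in mind concrete, here is a minimal sketch (all numbers, fold sizes and the weighting scheme are made up purely for illustration): each fold yields both a value of the statistic and a within-fold variance estimate, and a classical pooled-variance formula combines the latter.

```python
import numpy as np

# Hypothetical per-fold results: each fold yields an estimate of the statistic
# and a within-fold variance estimate from its n_i validation samples.
fold_means = np.array([0.82, 0.79, 0.85, 0.80, 0.83])       # statistic per fold
fold_vars  = np.array([0.010, 0.012, 0.009, 0.011, 0.010])  # variance estimate per fold
fold_sizes = np.array([20, 20, 20, 20, 20])                 # validation samples per fold

# Overall point estimate: size-weighted mean of the fold statistics
pooled_mean = np.average(fold_means, weights=fold_sizes)

# Classical pooled variance: weight each fold's variance by its degrees of freedom
pooled_var = np.sum((fold_sizes - 1) * fold_vars) / np.sum(fold_sizes - 1)

print(pooled_mean, pooled_var)
```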

Best Answer

Very interesting question, I'll have to read the papers you cite... But maybe this will get us started in the direction of an answer:

I usually tackle this problem in a very pragmatic way: I iterate the k-fold cross validation with new random splits and calculate performance just as usual for each iteration. The overall test samples are then the same for each iteration, and the differences come from different splits of the data.

I report this, e.g., as the 5th to 95th percentile of the observed performance with respect to exchanging up to $\frac{n}{k} - 1$ samples for new samples, and discuss it as a measure of model instability.
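A minimal sketch of this iterated-CV scheme, assuming a scikit-learn style workflow (the data set, classifier, k, and number of iterations below are placeholders; any estimator with that interface would do):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=300, random_state=0)  # placeholder data
clf = LogisticRegression(max_iter=1000)                    # placeholder classifier

k, n_iterations = 5, 50
iteration_performance = []
for i in range(n_iterations):
    # new random split into k folds for every iteration
    cv = StratifiedKFold(n_splits=k, shuffle=True, random_state=i)
    scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
    iteration_performance.append(scores.mean())  # pooled hit rate of this iteration

iteration_performance = np.array(iteration_performance)
# spread across iterations, reported e.g. as the 5th to 95th percentile
print(np.percentile(iteration_performance, [5, 95]))
```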

Side note: I anyway cannot use formulas that need the sample size. As my data are clustered or hierarchical in structure (many similar but not repeated measurements of the same case, usually several [hundred] different locations of the same specimen), I don't know the effective sample size.

Comparison to bootstrapping:

  • iterations use new random splits.

  • the main difference is resampling with (bootstrap) or without (cv) replacement.

  • computational cost is about the same, as I'd choose the number of cv iterations $\approx$ the number of bootstrap iterations / k, i.e. calculate the same total number of models.

  • bootstrap has advantages over cv in terms of some statistical properties (asymptotically correct; possibly you need fewer iterations to obtain a good estimate)

  • however, with cv you have the advantage that you are guaranteed that

    • the number of distinct training samples is the same for all models (important if you want to calculate learning curves)
    • each sample is tested exactly once in each iteration
  • some classification methods will discard repeated samples, so bootstrapping does not make sense

Variance for the performance

Short answer: yes, it does make sense to speak of variance in a situation where only {0, 1} outcomes exist.

Have a look at the binomial distribution ($k$ = number of successes, $n$ = number of tests, $p$ = true probability of success = expected value of $k/n$):

$\sigma^2 (k) = np(1-p)$

The variance of proportions (such as hit rate, error rate, sensitivity, TPR, ...; I'll use $p$ from now on and $\hat p$ for the observed value in a test) is a topic that fills whole books...

  • Fleiss: Statistical Methods for Rates and Proportions
  • Forthofer and Lee: Biostatistics has a nice introduction.

Now, $\hat p = \frac{k}{n}$ and therefore:

$\sigma^2 (\hat p) = \frac{p (1-p)}{n}$

This means that the uncertainty in measuring classifier performance depends only on the true performance $p$ of the tested model and the number of test samples $n$.
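As a quick numerical illustration of how this uncertainty shrinks with the number of test samples (the performance value of 0.85 is made up):

```python
import numpy as np

p = 0.85                      # assumed true performance (hit rate)
for n in (25, 100, 400):      # number of independent test samples
    sigma_p_hat = np.sqrt(p * (1 - p) / n)
    print(n, round(sigma_p_hat, 3))   # standard deviation of the observed hit rate
```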

In cross validation you assume

  1. that the k "surrogate" models have the same true performance as the "real" model you usually build from all samples. (The breakdown of this assumption is the well-known pessimistic bias).

  2. that the k "surrogate" models have the same true performance (are equivalent, have stable predictions), so you are allowed to pool the results of the k tests.
    Of course, then not only the $k$ "surrogate" models of one iteration of cv can be pooled, but also the $k \cdot i$ models of $i$ iterations of k-fold cv.

Why iterate?

The main thing the iterations tell you is the model (prediction) instability, i.e. variance of the predictions of different models for the same sample.

You can report instability directly, e.g. as the variance of the predictions for a given test case (regardless of whether the prediction is correct), or a bit more indirectly as the variance of $\hat p$ across different cv iterations.

And yes, this is important information.
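A sketch of the first, more direct option, again with placeholder data and classifier: collect the prediction each test case receives in every repetition and look at how much these predictions vary per case.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict

X, y = make_classification(n_samples=300, random_state=0)  # placeholder data
clf = LogisticRegression(max_iter=1000)                    # placeholder classifier

n_iterations, k = 50, 5
# rows = iterations, columns = samples; each sample is tested exactly once per iteration
predictions = np.empty((n_iterations, len(y)), dtype=int)
for i in range(n_iterations):
    cv = StratifiedKFold(n_splits=k, shuffle=True, random_state=i)
    predictions[i] = cross_val_predict(clf, X, y, cv=cv)

# per-case instability: variance of the {0, 1} predictions over the iterations
per_case_variance = predictions.var(axis=0)
print("fraction of cases with unstable predictions:", np.mean(per_case_variance > 0))
```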

Now, if your models are perfectly stable, all $n_{bootstrap}$ or $k \cdot n_{iter.~cv}$ surrogate models would produce exactly the same prediction for a given sample. In other words, all iterations would have the same outcome. The variance of the estimate would not be reduced by the iterations (assuming $n - 1 \approx n$). In that case, assumption 2 above is met and you are subject only to $\sigma^2 (\hat p) = \frac{p (1-p)}{n}$, with $n$ being the total number of samples tested in all $k$ folds of the cv.
In that case, iterations are not needed (other than for demonstrating stability).

You can then construct confidence intervals for the true performance $p$ from the observed number of successes $k$ in the $n$ tests. So, strictly speaking, there is no need to report the uncertainty if $\hat p$ and $n$ are reported. However, in my field not many people are aware of that, or even have an intuitive grip on how large the uncertainty is for a given sample size. So I'd recommend reporting it anyway.
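For example (the counts are made up, and the Wilson interval from statsmodels is just one common choice of binomial confidence interval):

```python
from statsmodels.stats.proportion import proportion_confint

k_success, n_tests = 170, 200   # made-up pooled CV result: 170 of 200 samples correct
p_hat = k_success / n_tests
low, high = proportion_confint(k_success, n_tests, alpha=0.05, method="wilson")
print(f"observed {p_hat:.3f}, 95% CI [{low:.3f}, {high:.3f}]")
```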

If you observe model instability, the pooled average is a better estimate of the true performance. The variance between the iterations is important information, and you could compare it to the expected minimal variance for a test set of size $n$ with a true performance equal to the average performance over all iterations.
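A minimal sketch of that comparison, with made-up per-iteration hit rates and test-set size:

```python
import numpy as np

# made-up per-iteration hit rates from an iterated k-fold CV, n test samples in total
iteration_p_hat = np.array([0.84, 0.86, 0.81, 0.88, 0.83, 0.85, 0.87, 0.82])
n = 200

p_bar = iteration_p_hat.mean()
between_iteration_var = iteration_p_hat.var(ddof=1)   # observed spread across iterations
minimal_var = p_bar * (1 - p_bar) / n                 # binomial floor for a stable model
print(between_iteration_var, minimal_var)
```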