Solved – Is the Monte Carlo cross-validation procedure valid?

cross-validation, machine learning

I thought K-fold cross-validation consists of the following steps.

  1. Split data randomly into $K$ chunks.
  2. Fit on $K-1$ chunks.
  3. Predict on remaining chunk. Keep predictions.
  4. Repeat 2-3 for the remaining $K-1$ choices of held-out chunk, so that each chunk is predicted exactly once.
  5. Evaluate a loss statistic that compares all pooled predictions to the true values (sketched in code below).
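For concreteness, a minimal sketch of procedure 1 in Python/scikit-learn; the data, the model (linear regression), and the loss (MSE) are placeholders chosen only for illustration, not anything from the question:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.5, size=100)

K = 5
preds = np.empty_like(y)
for train_idx, test_idx in KFold(n_splits=K, shuffle=True, random_state=0).split(X):
    model = LinearRegression().fit(X[train_idx], y[train_idx])  # step 2: fit on K-1 chunks
    preds[test_idx] = model.predict(X[test_idx])                # step 3: keep predictions

# step 5: one loss statistic over all pooled predictions
print(mean_squared_error(y, preds))
```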

Now I have seen the following procedure (xbart in the dbarts package):

  1. Split data randomly into $K$ chunks.
  2. Fit on $K-1$ chunks.
  3. Predict on remaining chunk. Evaluate loss statistic and keep.
  4. Repeat 1-3 $N$ times.
  5. Average the $N$ loss statistics or pool them in some other way (see the sketch below).
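And a corresponding sketch of procedure 2, again with placeholder data, model, and loss; ShuffleSplit with test_size = 1/K re-splits the data on every iteration and holds out one chunk:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import ShuffleSplit

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.5, size=100)

K, N = 5, 50
losses = []
# each of the N iterations re-splits the data and holds out one chunk of size n/K
for train_idx, test_idx in ShuffleSplit(n_splits=N, test_size=1 / K, random_state=0).split(X):
    model = LinearRegression().fit(X[train_idx], y[train_idx])  # fit on the other K-1 chunks
    y_hat = model.predict(X[test_idx])                          # predict the held-out chunk
    losses.append(mean_squared_error(y[test_idx], y_hat))       # per-iteration loss

print(np.mean(losses))  # average the N loss statistics
```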

Note the difference in steps 4 and 5.

The first procedure is standard and recommended in major textbooks. The second procedure seems new to me. I cannot immediately see why not to do it, but it seems suboptimal in terms of variance. Are there arguments for or against the second procedure?

The second approach is implemented in the package mentioned above, and I wonder whether it is wrong to do.

Best Answer

Short answer: it is neither wrong nor new.


We discussed this validation scheme under the name "set validation" about 15 years ago when preparing a paper*, but in the end never actually referred to it because we did not find it used in practice.

Wikipedia refers to the same validation scheme as "repeated random sub-sampling validation" or "Monte Carlo cross validation".

From a theoretical point of view, the concept was of interest to us because

  • it is another interpretation of the same numbers usually referred to as hold-out; only the model the estimate is used for differs: hold-out estimates are taken as the performance estimate for exactly the model that was tested, whereas this set or Monte Carlo validation treats the tested model(s) as surrogate model(s) and interprets the very same numbers as a performance estimate for a model built on the whole data set, as is usually done with cross validation or out-of-bootstrap validation estimates;
  • and it is somewhere in between
    • the more common cross validation techniques (resampling without replacement, interpreted as an estimate for the whole-data model),
    • hold-out (see above: same calculation and numbers, though typically without the $N$ iterations/repetitions, and with a different interpretation),
    • and out-of-bootstrap (the $N$ iterations/repetitions are typical for out-of-bootstrap, but I've never seen them applied to hold-out, and they are [unfortunately] rarely done with cross validation); a small index-generator sketch follows this list.
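If it helps to see the distinction concretely, here is a rough sketch of my own (Python/scikit-learn) showing how the three schemes differ only in how the train/test indices are drawn; the out-of-bootstrap generator is hand-rolled, since this is only an illustration:

```python
import numpy as np
from sklearn.model_selection import KFold, ShuffleSplit

n, N = 100, 25
idx = np.arange(n)
rng = np.random.default_rng(0)

# k-fold CV: disjoint test chunks, every sample is tested exactly once (no replacement)
cv_splits = list(KFold(n_splits=5, shuffle=True, random_state=0).split(idx))

# hold-out / "set validation": random splits without replacement;
# N = 1 is classical hold-out, N > 1 is Monte Carlo CV / repeated random sub-sampling
set_splits = list(ShuffleSplit(n_splits=N, test_size=0.2, random_state=0).split(idx))

# out-of-bootstrap: training samples drawn WITH replacement,
# testing on whatever was left out (about 37% of the data, roughly a 1:2 split)
oob_splits = []
for _ in range(N):
    train = rng.choice(idx, size=n, replace=True)
    oob_splits.append((train, np.setdiff1d(idx, train)))
```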

* Beleites, C.; Baumgartner, R.; Bowman, C.; Somorjai, R.; Steiner, G.; Salzer, R. & Sowa, M. G. Variance reduction in estimating classification error using sparse datasets, Chemom Intell Lab Syst, 79, 91 - 100 (2005).
The "set validation" error for N = 1 is hidden in fig. 6 (i.e. its bias + variance can be recostructed from the data given but are not explicitly given.)


"but it seems suboptimal in terms of variance. Are there arguments for or against the second procedure?"

Well, in the paper above we found the total error (bias² + variance) of out-of-bootstrap and repeated/iterated $k$-fold cross validation to be pretty similar (with oob having somewhat lower variance but higher bias - but we did not follow up to check whether/how much of this trade-off is due to resampling with/without replacement and how much is due to the different split ratio of about 1 : 2 for oob).
Keep in mind, though, that I'm talking about accuracy in small sample size situations, where the dominating contributor to the variance (uncertainty) is the same for all resampling schemes: the limited number of real samples available for testing, and that is the same for oob, cross validation, or set validation. Iterations/repetitions allow you to reduce the variance caused by the instability of the (surrogate) models, but not the uncertainty due to the limited total sample size.
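As a back-of-the-envelope sketch of that last point (the notation here is mine, not the paper's): for a proportion-type figure of merit such as an error rate $p$ estimated from $n_{\text{test}}$ distinct tested cases, the variance of the resampling estimate behaves roughly like

$$\operatorname{Var}(\hat p) \;\approx\; \underbrace{\frac{p\,(1-p)}{n_{\text{test}}}}_{\text{finite test sample}} \;+\; \underbrace{\frac{\sigma^2_{\text{instability}}}{N}}_{\text{surrogate-model instability}},$$

so increasing $N$ shrinks only the second term, while the first is fixed by the total number of available samples and is shared by all of the schemes above.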
Thus, assuming that you perform an adequately large number of iterations/repetitions N, I'd not expect practically relevant differences in the performance of these validation schemes.

One validation scheme may fit better with the scenario you try to simulate by the resampling, though.
