Leave-one-out cross-validation does not generally lead to better performance than K-fold, and is more likely to be worse, as it has a relatively high variance (i.e. its value changes more across different samples of data than the value from k-fold cross-validation). This is bad in a model selection criterion, because it means the criterion can be optimised in ways that merely exploit the random variation in the particular sample of data rather than making genuine improvements in performance; in other words, you are more likely to over-fit the model selection criterion. The reason leave-one-out cross-validation is used in practice is that for many models it can be evaluated very cheaply as a by-product of fitting the model.
If computational expense is not primarily an issue, a better approach is to perform repeated k-fold cross-validation, where the k-fold cross-validation procedure is repeated with different random partitions into k disjoint subsets each time. This reduces the variance.
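If you want to try this, here is a minimal sketch of repeated k-fold cross-validation using scikit-learn; the data and classifier are just placeholders for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Placeholder data and classifier just for illustration.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# 5-fold CV repeated 20 times, each time with a different random partition;
# averaging over the repetitions reduces the variance of the CV estimate.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=20, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)

print(scores.mean(), scores.std())
```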
If you have only 20 patterns, it is very likely that you will over-fit the model selection criterion, which is a much neglected pitfall in statistics and machine learning (shameless plug: see my paper on the topic). You may be better off choosing a relatively simple model and trying not to optimise it very aggressively, or adopting a Bayesian approach and averaging over all model choices, weighted by their plausibility. IMHO optimisation is the root of all evil in statistics, so it is better not to optimise if you don't have to, and to optimise with caution whenever you do.
Note also that if you are going to perform model selection, you need to use something like nested cross-validation if you also need a performance estimate (i.e. you need to consider model selection as an integral part of the model-fitting procedure and cross-validate that as well).
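A minimal sketch of what such a nested cross-validation can look like with scikit-learn (the classifier and parameter grid are placeholders, not a recommendation):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

# Placeholder data, classifier and parameter grid just for illustration.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

inner_cv = KFold(n_splits=5, shuffle=True, random_state=1)
outer_cv = KFold(n_splits=5, shuffle=True, random_state=2)

# The tuning (model selection) is wrapped inside the estimator, so the outer
# loop cross-validates the whole fitting-plus-selection procedure.
tuned_svc = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=inner_cv)
nested_scores = cross_val_score(tuned_svc, X, y, cv=outer_cv)

print(nested_scores.mean(), nested_scores.std())
```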
Is setting NaNs to the mean of the training or test set OK?
You need to define a procedure that you always follow. I see two valid options here:
- either use some value (e.g. mean) calculated for that subject (see also below)
- or use some value calculated from the training set, basically a hyperparameter "value to be used for replacing NAs" (see the sketch after this list). This value should not be calculated from the whole test set: independent testing also means that no parameters calculated from other test subjects should be used, i.e. the processing of a test subject should not depend on the composition of the test set.
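As a sketch of the second option, the imputer can be treated as part of the model so that the replacement value is computed on each training fold only; the data and classifier below are made up for illustration:

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Made-up data with some missing values, just for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X[rng.random(X.shape) < 0.1] = np.nan
y = rng.integers(0, 2, size=100)

# The replacement value (here: the column mean) is learned on each training
# fold only; the test fold is transformed with it but never used to compute it.
model = make_pipeline(SimpleImputer(strategy="mean"),
                      LogisticRegression(max_iter=1000))
print(cross_val_score(model, X, y, cv=5).mean())
```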
Edit:
Which method to follow (replace by a value computed within the subject or by a value computed within the training set) should IMHO be decided from knowledge about the application and the data; we cannot tell you more than very general guidelines here.
Why not by a value computed within the test set? That would mean that the value used to replace NAs in test subject A depends on whether subject A is tested together with subject B or with subject C – which doesn't seem to be a desirable or sensible behaviour to me.
You may also want to look up "Imputation" which is the general term for techniques that fill in missing values.
Centering and scaling (standardization): if you have "external" (scientific) knowledge that suggests that a standardization within the subjects should take place, then go ahead with that. Whether this is sensible depends on your application and data, so we cannot answer this question.
For a more general discussion of centering and standardization, see e.g. "Variables are often adjusted (e.g. standardised) before making a model - when is this a good idea, and when is it a bad one?" and "When conducting multiple regression, when should you center your predictor variables & when should you standardize them?"
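To make the two options concrete, here is a small sketch with made-up data and column names, contrasting within-subject standardization with standardization learned on the training subjects only:

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Made-up data and column names, just for illustration.
df = pd.DataFrame({
    "subject": np.repeat(["A", "B", "C", "D"], 25),
    "feature": np.random.default_rng(0).normal(size=100),
})

# Option 1: standardize within each subject, using only that subject's data
# (independent of how the subjects end up split into training and test sets).
df["feature_within_subject"] = (
    df.groupby("subject")["feature"]
      .transform(lambda x: (x - x.mean()) / x.std())
)

# Option 2: learn the scaling on the training subjects only and apply it
# unchanged to the held-out test subject.
train, test = df[df["subject"] != "D"], df[df["subject"] == "D"]
scaler = StandardScaler().fit(train[["feature"]])
test_scaled = scaler.transform(test[["feature"]])
```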
Now within each outer fold I plan to tune a classifier's parameter with the help of another cross-validation.
With 4 subjects you probably won't be able to compare classifiers anyway: the uncertainty due to having only 4 test cases is far too high. Can't you fix this parameter based on experience with similar data?
To illustrate the problem: assume you observe 4 correct classifications out of 4 test subjects. That gives you a point estimate of 100% correct predictions. If you look at confidence interval calculations for this, you get, e.g. with the Agresti-Coull method, a 95% CI of roughly 45 - 105% (the interval is not even truncated at 100%, and it is obviously not very precise at this sample size); the Bayes method with a uniform prior makes it roughly 55 - 100%. In any case this means that even if you observe perfect test results, it is not at all clear whether you can claim that the model is actually better than guessing. As long as you do not need to fear that fixing the parameter beforehand will produce a model that is clearly worse than guessing, you cannot measure improvements in the practically important range anyway.
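Here is a sketch of how intervals of this kind can be computed (using scipy; the exact interval conventions used above may differ slightly, e.g. I take the Bayesian interval as the highest-density interval):

```python
import numpy as np
from scipy import stats

x, n = 4, 4                                    # 4 correct out of 4 test subjects
z = stats.norm.ppf(0.975)                      # ~1.96 for a 95% interval

# Agresti-Coull: add z^2 pseudo-observations before the normal approximation.
n_tilde = n + z**2
p_tilde = (x + z**2 / 2) / n_tilde
half_width = z * np.sqrt(p_tilde * (1 - p_tilde) / n_tilde)
print("Agresti-Coull:", p_tilde - half_width, p_tilde + half_width)   # ~0.45 .. ~1.06

# Bayes with uniform prior: posterior is Beta(1 + x, 1 + n - x) = Beta(5, 1).
# The posterior density is monotonically increasing, so the 95% highest-density
# interval is [q, 1] with P(p >= q) = 0.95.
print("Bayes (uniform prior):", stats.beta.ppf(0.05, 1 + x, 1 + n - x), 1.0)  # ~0.55 .. 1.0
```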
The situation may be less drastic if you optimize e.g. the Brier score, but with 4 subjects I'd suspect that you still will not reach the precision you need to detect the expected improvement during the optimization.
Edit: Unfortunately, while 20 subjects are far more than 4, from a classifier validation statistics point of view, 20 is still very few.
We recently concluded that if you need to stick with the frequently used proportions for characterizing your classifier, then at least in our field you cannot expect a useful precision in the test results with fewer than 75 - 100 test subjects (in the denominator of the proportion!). Again, you may be better off if you can switch to e.g. the Brier score, and to a paired design for classifier comparison, but I'd call it lucky if that gains you a factor of 5 in sample size.
You can find our thoughts here: Beleites, C., Neugebauer, U., Bocklitz, T., Krafft, C. and Popp, J.: Sample size planning for classification models. Anal Chim Acta, 2013, 760, 25-33. DOI: 10.1016/j.aca.2012.11.007; accepted manuscript on arXiv: 1211.1323.
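For reference, the Brier score is just the mean squared difference between the predicted class probability and the observed outcome; a minimal sketch with made-up numbers:

```python
import numpy as np
from sklearn.metrics import brier_score_loss

# Made-up labels and predicted probabilities for 4 test subjects.
y_true = np.array([1, 0, 1, 1])
p_pred = np.array([0.9, 0.2, 0.6, 0.8])

# Brier score: mean squared difference between the predicted probability of
# the positive class and the observed 0/1 outcome (smaller is better).
print(brier_score_loss(y_true, p_pred))        # same as np.mean((p_pred - y_true) ** 2)
```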
AFAIK, dealing with the random uncertainty on test results during classifier optimization is an unsolved problem. (If not, I'd be extremely interested in papers about the solution!)
So my recommendation would be to do a preliminary experiment/analysis at the end of which you try to estimate the random uncertainty on the comparison results. If this uncertainty does not allow you to optimize (which I'd unfortunately expect to be the outcome), report that result and argue that, as a consequence, you currently have no choice but to fix the hyper-parameters to some sensible (though not optimized) value.
Does the inner cross-validation necessarily need to be leave-one-subject-out as well?
If you do inner cross-validation, it would be better to do it subject-wise as well: without this, you'll get overly optimistic inner results. That would not be a problem if the bias were constant. However, it usually isn't, and you have the additional problem that, due to the random uncertainty together with the optimistic bias, you may observe many models that seem to be perfect. Among these you cannot distinguish (after all, they all seem to be perfect), which can completely mess up the optimization.
Again, with so few subjects I'd avoid this inner optimization and fix the parameter.
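If you do end up doing the inner optimization, a subject-wise split in both loops can be set up with group-aware splitters; here is a minimal sketch with placeholder data, subjects and classifier:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, GroupKFold, LeaveOneGroupOut
from sklearn.svm import SVC

# Placeholder data, subjects and classifier just for illustration.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
subjects = np.repeat(np.arange(20), 10)        # 20 subjects, 10 samples each

outer = LeaveOneGroupOut()                     # outer loop: leave one subject out
scores = []
for train_idx, test_idx in outer.split(X, y, groups=subjects):
    # Inner loop: subject-wise k-fold on the training subjects only.
    search = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]},
                          cv=GroupKFold(n_splits=5))
    search.fit(X[train_idx], y[train_idx], groups=subjects[train_idx])
    scores.append(search.score(X[test_idx], y[test_idx]))

print(np.mean(scores))
```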
Best Answer
[TL;DR] A summary of recent posts and debates (July 2018)
This topic has been widely discussed both on this site and in the scientific literature, with conflicting views, intuitions and conclusions. Back in 2013, when this question was first asked, the dominant view was that LOOCV leads to a larger variance of the estimate of the expected generalization error of a training algorithm producing models out of samples of size $n(K-1)/K$.
This view, however, appears to be an incorrect generalization of a special case and I would argue that the correct answer is: "it depends..."
Paraphrasing Yves Grandvalet, the author of a 2004 paper on the topic, the intuitive argument is that the outcome hinges on the stability of the learning algorithm: with leave-one-out the training sets overlap almost completely, so the individual fold estimates are highly correlated, and whether this correlation inflates the variance depends on how unstable the fitted model is.
Experimental simulations from myself and others on this site, as well as those of the researchers in the papers linked below, show that there is no universal truth on the topic. Most experiments show monotonically decreasing or constant variance with $K$, but some special cases show increasing variance with $K$.
The rest of this answer proposes a simulation on a toy example and an informal literature review.
[Update] You can find here an alternative simulation for an unstable model in the presence of outliers.
Simulations from a toy example showing decreasing / constant variance
Consider the following toy example where we are fitting a degree 4 polynomial to a noisy sine curve. We expect this model to fare poorly for small datasets due to overfitting, as shown by the learning curve.
Note that we plot 1 - MSE here to reproduce the illustration from ESLII page 243
Methodology
You can find the code for this simulation here. The approach was to generate many independent datasets of a given size, run K-fold cross-validation with varying $K$ on each, and look at the bias and the standard deviation of the resulting MSE estimates across the datasets.
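As a rough sketch of this kind of setup (not the linked code; dataset sizes, noise level and model below are made up for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
n_datasets, n_points = 100, 40
model = make_pipeline(PolynomialFeatures(degree=4), LinearRegression())

for k in (2, 5, 10, n_points):                 # n_points folds = leave-one-out
    mse_per_dataset = []
    for _ in range(n_datasets):
        # One noisy sine dataset per iteration.
        X = rng.uniform(0, 2 * np.pi, size=(n_points, 1))
        y = np.sin(X).ravel() + rng.normal(scale=0.5, size=n_points)
        scores = cross_val_score(model, X, y, cv=KFold(n_splits=k, shuffle=True),
                                 scoring="neg_mean_squared_error")
        mse_per_dataset.append(-scores.mean())
    # The spread across datasets is the variance of the CV estimate we care about.
    print(f"K={k:3d}  mean MSE={np.mean(mse_per_dataset):.3f}  "
          f"std across datasets={np.std(mse_per_dataset):.3f}")
```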
[Figure: impact of $K$ on the bias and variance of the MSE across the $i$ datasets. Left-hand side: K-folds for 200 data points; right-hand side: K-folds for 40 data points.]
[Figure: standard deviation of the MSE (across datasets $i$) vs. number of folds.]
From this simulation, it seems that the variance of the CV estimate decreases or stays roughly constant as $K$ increases, rather than increasing.
An informal literature review
The following three papers investigate the bias and variance of cross-validation:
Kohavi 1995
This paper is often referred to as the source of the argument that LOOCV has higher variance. In section 1, Kohavi notes that leave-one-out is almost unbiased but has high variance, leading to unreliable estimates, citing Efron (1983). This statement is the source of much confusion, because it seems to come from Efron in 1983, not from Kohavi himself. Both Kohavi's theoretical argumentation and his experimental results go against this statement:
Corollary 2 (Variance in CV)
Experiment: In his experiments, Kohavi compares two algorithms, a C4.5 decision tree and a Naive Bayes classifier, across multiple datasets from the UC Irvine repository. His results are shown below: the left-hand side shows accuracy vs. number of folds (i.e. bias) and the right-hand side shows standard deviation vs. number of folds.
In fact, only the decision tree on three of the datasets clearly shows higher variance for increasing $K$; the other results show decreasing or constant variance.
Finally, although the conclusion could be worded more strongly, there is no argument in the paper for LOO having higher variance, quite the opposite; see section 6 (Summary) of the paper.
Zhang and Yang
The authors take a strong view on this topic and state it clearly in Section 7.1 of their paper.
Experimental results: Similarly, Zhang and Yang's experiments point in the direction of decreasing variance with $K$, as shown below for the true model and the wrong model in Figure 3 and Figure 5.
The only experiment for which variance increases with $K$ is for the Lasso and SCAD models. This is explained as follows on page 31: