Is it OK to set NaNs to the mean of the training or test set?
You need to define a procedure that you always follow. I see two valid options here:
- either use some value (e.g. mean) calculated for that subject (see also below)
- or some value calculated from the training set, basically a hyperparameter "value to be used for replacing NAs" (see the sketch after this list). This should not be calculated from the whole test set: independent testing also means that no parameters calculated from other test subjects should be used; the processing should not depend on the composition of the test set.
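For the second option, the point is that the replacement value is estimated on the training set only and then reused unchanged on the test set. A minimal sketch, assuming scikit-learn is available (all variable names are placeholders, not from the question):

```python
import numpy as np
from sklearn.impute import SimpleImputer

# Placeholder data: 3 training samples and 1 test sample, 2 features each.
X_train = np.array([[1.0, 2.0], [np.nan, 3.0], [4.0, np.nan]])
X_test = np.array([[np.nan, 5.0]])

imputer = SimpleImputer(strategy="mean")
X_train_filled = imputer.fit_transform(X_train)  # means are computed on training data only
X_test_filled = imputer.transform(X_test)        # ... and reused unchanged on the test data

print(X_test_filled)  # the NaN becomes the training mean 2.5, independent of other test subjects
```

This way the imputed value behaves like any other fitted parameter: it is fixed once training is done, so the prediction for a test subject cannot depend on which other subjects happen to be in the test set.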
Edit:
Which method to follow (replace by a value computed within the subject or by a value computed from the training set) should IMHO be decided based on knowledge about the application and the data; we cannot tell you more than very general guidelines here.
Why not by a value computed within the test set? That would mean that the value used to replace NAs in test subject A depends on whether subject A is tested together with subject B or subject C – which does not seem to be desirable or sensible behaviour to me.
You may also want to look up "Imputation" which is the general term for techniques that fill in missing values.
Centering and scaling (standardization): if you have "external" (scientific) knowledge that suggests that a standardization within the subjects should take place, then go ahead with that. Whether this is sensible depends on your application and data, so we cannot answer this question.
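If you do decide on within-subject standardization, one possible form is to z-score each feature using only that subject's own samples; a sketch (the function and variable names are my own, purely illustrative):

```python
import numpy as np

def standardize_within_subject(X, subject_ids):
    """Z-score each feature per subject, using only that subject's samples."""
    X = X.astype(float).copy()
    for s in np.unique(subject_ids):
        rows = subject_ids == s
        mu = X[rows].mean(axis=0)
        sd = X[rows].std(axis=0)
        sd = np.where(sd == 0, 1.0, sd)  # guard against zero-variance features
        X[rows] = (X[rows] - mu) / sd
    return X
```

Because each subject is standardized with its own statistics, this preprocessing is computed entirely within the subject and therefore does not leak information between training and test subjects.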
For a more general discussion of centering and standardization, see e.g. "Variables are often adjusted (e.g. standardised) before making a model - when is this a good idea, and when is it a bad one?" and "When conducting multiple regression, when should you center your predictor variables & when should you standardize them?"
Now within each outer fold I plan to tune a classifier's parameter with help of another cross-validation.
With 4 subjects you probably won't be able to compare classifiers anyway: the uncertainty due to having only 4 test cases is far too high. Can't you fix this parameter based on experience with similar data?
To illustrate the problem: assume you observe 4 correct classifications out of 4 test subjects. That gives you a point estimate of 100% correct predictions. If you look at confidence interval calculations for this, you get e.g. for the Agresti-Coull method a 95% CI of 45 - 105% (obviously not very precise with the small sample size); the Bayes method with a uniform prior makes it 55 - 100%. In any case it means that even if you observe perfect test results, it is not quite clear whether you can claim that the model is actually better than guessing. As long as you do not need to fear that fixing the parameter beforehand will produce a model that is clearly worse than guessing, you cannot measure improvements in the practically important range anyway.
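If you want to reproduce those numbers, here is a sketch of both calculations, assuming SciPy (reading the quoted 55 - 100% as a one-sided Bayesian interval):

```python
import numpy as np
from scipy import stats

def agresti_coull(successes, trials, conf=0.95):
    """Agresti-Coull interval; note it may extend beyond [0, 1] for extreme counts."""
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    n_adj = trials + z**2
    p_adj = (successes + z**2 / 2) / n_adj
    half = z * np.sqrt(p_adj * (1 - p_adj) / n_adj)
    return p_adj - half, p_adj + half

print(agresti_coull(4, 4))  # roughly (0.45, 1.06), i.e. the 45 - 105% above

# Bayes with uniform prior: the posterior after 4 successes in 4 trials is Beta(4 + 1, 0 + 1).
posterior = stats.beta(5, 1)
print(posterior.ppf(0.05))  # roughly 0.55: the one-sided 95% interval is about 55 - 100%
```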
The situation may be less drastic if you optimize e.g. the Brier score, but with 4 subjects I'd suspect that you still do not reach the precision you need to detect the expected improvement during the optimization.
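For reference, the Brier score is just the mean squared difference between the predicted probability and the 0/1 outcome; a quick sketch with made-up numbers:

```python
import numpy as np
from sklearn.metrics import brier_score_loss

y_true = np.array([1, 0, 1, 1])          # made-up outcomes for 4 test subjects
p_pred = np.array([0.9, 0.2, 0.7, 0.6])  # made-up predicted probabilities

print(np.mean((p_pred - y_true) ** 2))   # Brier score by hand: 0.075
print(brier_score_loss(y_true, p_pred))  # same value via scikit-learn
```

Because it uses the continuous predicted probabilities rather than hard correct/incorrect counts, each test case carries more information than in a proportion.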
Edit: Unfortunately, while 20 subjects are far more than 4, from a classifier validation statistics point of view, 20 is still very few.
We recently concluded that if you need to stick with the frequently used proportions for characterizing your classifier, at least in our field you cannot expect a useful precision in the test results with fewer than 75 - 100 test subjects (in the denominator of the proportion!). Again, you may be better off if you can switch to e.g. the Brier score, and with a paired design for classifier comparison, but I'd call it lucky if that gains you a factor of 5 in sample size.
You can find our thoughts here: Beleites, C., Neugebauer, U., Bocklitz, T., Krafft, C. and Popp, J.: Sample size planning for classification models. Anal Chim Acta, 2013, 760, 25-33. DOI: 10.1016/j.aca.2012.11.007; accepted manuscript on arXiv: 1211.1323.
AFAIK, dealing with the random uncertainty on test results during classifier optimization is an unsolved problem. (If not, I'd be extremely interested in papers about the solution!)
So my recommendation would be to do a preliminary experiment/analysis, at the end of which you try to estimate the random uncertainty on the comparison results. If that uncertainty does not allow optimization (which I'd unfortunately expect to be the outcome), report this result and argue that, in consequence, you currently have no choice but to fix the hyper-parameters to some sensible (though not optimized) value.
Does the inner cross-validation necessarily need to be leave-one-subject-out as well?
If you do inner cross validation, it would be better to do it subject-wise as well: without this, you'll get overly optimistic inner results. That would not be a problem if the bias were constant. However, it usually isn't, and you have the additional problem that, due to the random uncertainty on top of the optimistic bias, you may observe many models that seem to be perfect. Among these you cannot distinguish (after all, they all seem to be perfect), which can completely mess up the optimization.
Again, with so few subjects I'd avoid this inner optimization and fix the parameter.
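If you nevertheless need the inner optimization, here is a minimal sketch of a subject-wise inner cross-validation with scikit-learn's GroupKFold (the classifier choice and all data here are placeholders, not a recommendation):

```python
import numpy as np
from sklearn.model_selection import GroupKFold, GridSearchCV
from sklearn.svm import SVC

# Placeholder data: `groups` holds the subject ID of each sample,
# so folds never split one subject across training and test.
X = np.random.rand(200, 10)            # 200 samples, 10 features
y = np.random.randint(0, 2, 200)       # binary labels
groups = np.repeat(np.arange(20), 10)  # 20 subjects, 10 samples each

inner_cv = GroupKFold(n_splits=5)      # subject-wise inner folds
search = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=inner_cv)
search.fit(X, y, groups=groups)        # groups are passed so the splits respect subjects
print(search.best_params_)
```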
Cross validation with k folds means you have to split your data set into k disjoint groups. In your case, for 10 folds, you split your data set into 10 disjoint groups of 400 samples each ($G_i$ with $i$ from 1 to 10). Usually the groups should have roughly the same size.
Now do the following:
- Train your classifier on $Train_1 = G_2\cup G_3 \cup ... \cup G_{10}$ and test it on $Test_1 = G_1$. Save test results for later use.
- Train your classifier on $Train_2 = G_1 \cup G_3 \cup ... \cup G_{10}$ and test on $Test_2 = G_2$ and save results for later use.
- Repeat for the remaining 8 folds and collect the results.
Now you have, for each instance of your data set, how it was classified, since the union of all $Test_i$ is the original data set (each group $G_i$ is tested exactly once). You can then measure the errors however you like.
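A minimal sketch of exactly this loop, assuming scikit-learn for the splitting (the classifier and data are placeholders):

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier

X = np.random.rand(4000, 20)        # placeholder for the 4000 samples
y = np.random.randint(0, 20, 4000)  # placeholder for the 20 classes

predictions = np.empty_like(y)
for train_idx, test_idx in KFold(n_splits=10, shuffle=True).split(X):
    clf = DecisionTreeClassifier().fit(X[train_idx], y[train_idx])
    predictions[test_idx] = clf.predict(X[test_idx])  # each G_i is tested exactly once

print("CV accuracy:", np.mean(predictions == y))
```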
Now there are a couple of things that I believe deserve some attention. You said you have 20 target classes and 4000 samples. I do not know your specific problem, but that does not seem to be plenty of data. So I believe it is better to do multiple cross validations and average the results; that way you decrease the chance of getting overly biased results.
Another thing to pay attention to is how you build your folds. You could use simple random sampling, but I believe it is better to use a stratified random procedure, which increases the chances of getting a usable CV estimate.
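Both points (averaging over repetitions and stratifying the folds) are directly supported by e.g. scikit-learn's RepeatedStratifiedKFold; a sketch with placeholder data and model:

```python
import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X = np.random.rand(4000, 20)
y = np.random.randint(0, 20, 4000)

# 10-fold CV repeated 5 times; stratification keeps the class proportions in each fold.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5)
scores = cross_val_score(DecisionTreeClassifier(), X, y, cv=cv)
print(scores.mean(), scores.std())
```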
You might also consider bootstrap testing if you do not have enough instances for a 10-fold cross validation with stratified sampling.
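And a bare-bones sketch of a bootstrap (out-of-bag) error estimate, in case the data set is too small for stratified 10-fold CV (again, the model and data are placeholders, not a prescription):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

X = np.random.rand(400, 20)
y = np.random.randint(0, 20, 400)

rng = np.random.default_rng()
errors = []
for _ in range(100):
    boot = rng.integers(0, len(X), len(X))       # sample indices with replacement
    oob = np.setdiff1d(np.arange(len(X)), boot)  # out-of-bag indices, left out by chance
    clf = DecisionTreeClassifier().fit(X[boot], y[boot])
    errors.append(np.mean(clf.predict(X[oob]) != y[oob]))

print("bootstrap OOB error:", np.mean(errors))
```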
Best Answer
You have indeed correctly described the way to work with crossvalidation. In fact, you are 'lucky' to have a reasonable validation set at the end, because often, crossvalidation is used to optimize a model, but no "real" validation is done.
As @Simon Stelling said in his comment, crossvalidation will lead to lower estimated errors (which makes sense because you are constantly reusing the data), but fortunately this is the case for all models, so, barring catastrophe (i.e. the errors are only reduced slightly for a "bad" model and more for "the good" model), selecting the model that performs best on a crossvalidated criterion will typically also be the best "for real".
A method that is sometimes used to correct somewhat for the too-low errors, especially if you are looking for parsimonious models, is to select the smallest model/simplest method whose crossvalidated error is within one SD of the (crossvalidated) optimum. Like crossvalidation itself, this is a heuristic, so it should be used with some care (if this is an option: make a plot of your errors against your tuning parameters; this will give you some idea of whether you have acceptable results).
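A sketch of that "one standard error" selection rule (the arrays are assumed to be ordered from simplest to most complex candidate; all names are illustrative):

```python
import numpy as np

def one_se_rule(cv_errors, cv_sds):
    """Pick the simplest candidate whose CV error is within one SD of the best one."""
    cv_errors = np.asarray(cv_errors)
    best = int(np.argmin(cv_errors))
    threshold = cv_errors[best] + cv_sds[best]
    for i, err in enumerate(cv_errors):  # candidates ordered simplest -> most complex
        if err <= threshold:
            return i
    return best

# Example: the optimum is candidate 2, but candidate 1 is within one SD of it.
print(one_se_rule([0.30, 0.22, 0.20, 0.21], [0.03, 0.03, 0.03, 0.03]))  # -> 1
```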
Given the downward bias of the errors, it is important not to publish the errors or other performance measures from the crossvalidation without mentioning that they come from crossvalidation (although, truth be told, I have seen too many publications that don't mention that the performance measure was obtained by checking the performance on the original dataset either; so mentioning crossvalidation actually makes your results worth more). For you this will not be an issue, since you have a validation set.
A final warning: if your model fitting results in some close competitors, it is a good idea to look at their performances on your validation set afterwards, but do not base your final model selection on that: you can at best use this to soothe your conscience, but your "final" model must have been picked before you ever look at the validation set.
Wrt your second question: I believe Simon has given you all the answers you need in his comment, but to complete the picture: as so often, it is the bias-variance trade-off that comes into play. If you know that, on average, you will reach the correct result (unbiasedness), the price is typically that each of your individual calculations may lie pretty far from it (high variance). In the old days, unbiasedness was the nec plus ultra; nowadays, a (small) bias is at times accepted (so you no longer know that the average of your calculations will equal the correct result) if it results in lower variance. Experience has shown that the balance is acceptable with 10-fold crossvalidation. For you, the bias would only be an issue for your model optimization, since you can afterwards estimate the criterion (unbiasedly) on the validation set. As such, there is little reason not to use crossvalidation.