Solved – Perform cross-validation on train set or entire data set

cross-validation

I am a bit confused about how I should perform cross-validation to evaluate a statistical learning model.

If I have a data set of 500 observations, should I divide it into a train and test set with, for example, 375 (75%) train observations and 125 (25%) test observations, and perform cross-validation on the train set? Or should I perform the cross-validation on the entire data set?

My understanding is that, so long as the aim of performing cross-validation is to acquire a more robust estimate of the test MSE (and not to optimize some tuning parameter), you should use the entire data set. My reasoning is that cross-validation does not give you a model you can use to predict on unseen test observations, only a measure of MSE for the training set on which the cross-validation is performed.

If I am mistaken, how could I use the cross-validation result to predict out-of-sample observations?

Could someone be so kind as to clarify this for me?

If relevant, the problem I am solving is performing cross-validation to assess the performance of a random forest model in R. Thanks in advance!

Best Answer

should I divide it into a train and test set with, for example, 375 (75%) train observations and 125 (25%) test observations, and perform cross-validation on the train set?

Yes

Or should I perform the cross-validation on the entire data set?

No

The test set should be handled independently of the training set. You could run a separate CV block on the test set if you really wish; it may provide some useful insight, but it is not universal practice. It can be useful if you plan to apply the model to a completely new set of ‘real world’ data. However, given that the test set is drawn from the same population as the training set, you would expect it to have similar characteristics to the training set if the split was performed correctly and without bias, so this may not tell you much. Mind you, it may be worth checking that assumption.
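For concreteness, here is a minimal sketch of that workflow in R, assuming a hypothetical data frame `dat` of 500 rows with a numeric response column `y`. The test rows are set aside before any cross-validation takes place:

```r
set.seed(42)
n         <- nrow(dat)                          # 500 observations (hypothetical data frame)
train_idx <- sample(n, size = floor(0.75 * n))  # 375 rows (75%) for training
train     <- dat[train_idx, ]
test      <- dat[-train_idx, ]                  # 125 rows (25%) held out, untouched by CV

# Cross-validation is run on `train` only; `test` is reserved for a single,
# final assessment of the model fitted on all of `train`.
```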

What is the purpose of CV?

So long as the aim of performing cross-validation is to acquire a more robust estimate of the test MSE

This is not the purpose of CV; rather, it is to estimate the robustness of your performance metrics. As @user86895 states, it does not measure MSE (see Mean squared error versus Least squared error, which one to compare datasets? for further reading). CV builds multiple models on subsets of the data and applies each one to the data withheld from that subset, iterating over the dataset until every observation has been included in a training subset and in a held-out subset. The final model is built on the whole training set, not taken from any of the individual CV-round models; the purpose of CV is not to build models but to assess the stability of the model's performance, i.e. how generalisable the model is.
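As an illustration (not a prescription), a hand-rolled k-fold CV loop for the random forest case might look like the sketch below, continuing the hypothetical `train`/`test` split above and assuming the randomForest package. Each round fits on k−1 folds and scores on the withheld fold; the deployed model is then refit on the whole training set:

```r
library(randomForest)

set.seed(42)
k     <- 5
folds <- sample(rep(1:k, length.out = nrow(train)))   # random fold labels

fold_mse <- sapply(1:k, function(i) {
  held_out <- train[folds == i, ]                      # data withheld from this round
  fit      <- randomForest(y ~ ., data = train[folds != i, ])
  mean((predict(fit, newdata = held_out) - held_out$y)^2)
})

fold_mse                                               # one performance estimate per CV round

# The final model is NOT one of the fold models; it is refit on all of `train`
# and assessed once on the untouched `test` set.
final_model <- randomForest(y ~ ., data = train)
test_mse    <- mean((predict(final_model, newdata = test) - test$y)^2)
```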

When comparing different data-processing or analysis algorithms on a dataset, CV provides a first filter to identify the workflows that produce the most stable models. It does this by estimating how variable the performance is between subsets of your training set, which allows you to detect models with a very high risk of overfitting and filter them out. Without cross-validation you would be picking based solely on the maximum performance, without regard to its stability. But when you come to apply a model in a deployed situation, its stability (relevance across the real-world population) will be more important than moderate differences in raw performance on a subset of curated samples (i.e. your original experimental set).
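To make that "stability filter" concrete, one simple way to compare two candidate workflows is to look at the spread of their per-fold errors rather than just the best value. The fold MSE vectors below are invented placeholders, standing in for the output of CV loops like the one above:

```r
# Invented per-fold MSEs for two hypothetical workflows, purely illustrative
fold_mse_a <- c(2.1, 2.3, 2.0, 2.2, 2.4)   # slightly worse on average, but consistent
fold_mse_b <- c(1.0, 3.6, 1.2, 3.4, 1.3)   # slightly better on average, but unstable

data.frame(
  workflow = c("A", "B"),
  mean_mse = c(mean(fold_mse_a), mean(fold_mse_b)),
  sd_mse   = c(sd(fold_mse_a),   sd(fold_mse_b))     # large sd = high overfitting risk
)
```

A workflow with a slightly worse mean but a much smaller spread is often the safer choice for deployment.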

Cross-validation is in fact essential for choosing the most basic parameters of a model, such as the number of components in PCA or PLS, using the Q2 statistic (which is R2 computed on the held-out data; see What is the Q² value for each component of a PCA) to determine when overfitting starts to degrade model performance.
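As a sketch of that idea (assuming the pls package and the same hypothetical `train` data frame with response `y`), Q2 for each number of PLS components can be computed as 1 − PRESS/TSS on the held-out folds; the component count where Q2 stops improving is where overfitting begins to hurt:

```r
library(pls)

set.seed(42)
k        <- 5
folds    <- sample(rep(1:k, length.out = nrow(train)))
max_comp <- 10                                   # hypothetical upper limit on components

q2 <- sapply(1:max_comp, function(a) {
  press <- 0; tss <- 0
  for (i in 1:k) {
    held_out <- train[folds == i, ]
    fit      <- plsr(y ~ ., data = train[folds != i, ], ncomp = max_comp)
    pred     <- drop(predict(fit, newdata = held_out, ncomp = a))
    press    <- press + sum((held_out$y - pred)^2)
    tss      <- tss   + sum((held_out$y - mean(train$y[folds != i]))^2)
  }
  1 - press / tss                                # Q2 using `a` components
})

which.max(q2)                                    # adding components past this point overfits
```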

If I am mistaken, how could I use the cross-validation result to predict out-of-sample observations?

I am taking this to mean 'how can I use the CV result to estimate performance beyond my experimental set?', but I will update this section of my answer if it is clarified differently.

CV is used as a first-line estimate of model stability, not to estimate performance in real-world settings; the only way to do that is to test the final model in a real-world situation. What CV provides is a risk analysis: if the model appears stable, you could decide it is time to risk it on a real-world test. If it is not stable, you probably need to expand your training set considerably and build a new model, ensuring an even representation of important sub-groups and confounding factors, since these (alongside random noise) are a source of overfitting: all relevant variation needs equal exposure to the model-building process to be properly weighted.

And a note on real-world validation: if it works, it doesn’t prove your model is generalisable, only that it works under the specific mechanisms by which it has been deployed in the real world.
