Solved – What’s the difference between Leave-One-Out and K-Fold Cross-Validation

cross-validation, data mining

As far as I know, in k-fold cross-validation the samples are split into k sets; in each round, k−1 of these are used to train the model and the remaining one is used to test it and estimate its error. In total, k error measurements are made, and finally the mean of these errors is taken.
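To make sure I have the procedure right, here is a minimal sketch of it (assuming scikit-learn is available; the dataset and classifier are just placeholders):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

X, y = load_iris(return_X_y=True)
kf = KFold(n_splits=5, shuffle=True, random_state=0)

errors = []
for train_idx, test_idx in kf.split(X):
    # k-1 folds train the model, the remaining fold tests it
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    errors.append(1 - model.score(X[test_idx], y[test_idx]))

# mean of the k error estimates
print(np.mean(errors))
```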

So, if my description of k-fold is more or less correct, what's the difference from leave-one-out cross-validation?

EDIT: Actually I don't care about the value of k, I simply don't see the difference between LOO and K-fold Cross validation.

Best Answer

Leave-one-out fits the model with n−1 observations and classifies the single observation left out. It differs from your description in that each held-out set contains exactly one observation: the process is repeated n times, once for each observation. In other words, leave-one-out is the special case of k-fold cross-validation where the number of folds k equals the sample size n. You can learn about this from the original paper by Lachenbruch and Mickey (1968). Here I use n for the full sample size, since k has a different meaning in k-fold cross-validation.
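To make the relationship concrete, here is a minimal sketch (assuming scikit-learn, which is not part of the original answer) showing that leave-one-out produces exactly the same splits as k-fold with the number of folds set to n:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold, LeaveOneOut

X, y = load_iris(return_X_y=True)
n = len(X)

loo_splits = list(LeaveOneOut().split(X))
kfold_splits = list(KFold(n_splits=n).split(X))

# Every LOO test set contains exactly one observation...
assert all(len(test) == 1 for _, test in loo_splits)

# ...and LOO coincides with unshuffled k-fold when k equals n.
assert all(
    (tr1 == tr2).all() and (te1 == te2).all()
    for (tr1, te1), (tr2, te2) in zip(loo_splits, kfold_splits)
)
print(len(loo_splits), len(kfold_splits))  # both equal n
```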