Solved – Cross-validation and parameter optimization

cross-validation, optimization, parameterization

I have a question about parameter optimization when using 10-fold cross-validation.

Should the parameters be fixed or not during every fold's model training? That is, (1) should I select one set of optimized parameters based on the average accuracy across all folds,

or

(2) should I find the optimized parameters for each fold separately, so that every fold uses its own optimized parameters to train its model and then tests on that fold's test data, finally averaging every fold's accuracy as the result?

Which one is the correct method for cross-validation? Thanks a lot.

Best Answer

Let us first distinguish between two sets of parameters: model parameters (e.g. the weights for features in a regression) and parameters of the learning algorithm itself, i.e. hyperparameters. The purpose of cross-validation is to identify hyperparameter values that generalise well beyond the particular samples we train on in each fold.
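
To make the distinction concrete, here is a small sketch, assuming scikit-learn's Ridge regression on a made-up toy dataset: alpha is a hyperparameter we fix before training, while coef_ holds the model parameters the learning algorithm fits from the data.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy data, purely for illustration.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.1, 0.9, 2.1, 2.9])

model = Ridge(alpha=1.0)              # hyperparameter: fixed before training
model.fit(X, y)
print(model.coef_, model.intercept_)  # model parameters: learned from the data
```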

More specifically: we search globally over the space of hyperparameters, but within each fold we fix the hyperparameters and learn only the model parameters. The outcome is the hyperparameter setting that produces, on average, the best performance across all folds. We can then use that setting to train a final model on the entire dataset. In other words, your option (1) is the correct procedure.
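
Here is a minimal sketch of that procedure, assuming scikit-learn, an SVM classifier, and the iris dataset; the classifier and the candidate values of C are placeholders for whatever you are actually tuning.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
cv = KFold(n_splits=10, shuffle=True, random_state=0)

best_score, best_C = -np.inf, None
for C in [0.01, 0.1, 1, 10, 100]:      # global search over hyperparameters
    # Within each of the 10 folds, C is fixed; only the model
    # parameters are learned on that fold's training data.
    scores = cross_val_score(SVC(C=C), X, y, cv=cv)
    if scores.mean() > best_score:     # keep the best average accuracy
        best_score, best_C = scores.mean(), C

# Retrain on the full dataset with the winning hyperparameter.
final_model = SVC(C=best_C).fit(X, y)
print(f"best C = {best_C}, mean CV accuracy = {best_score:.3f}")
```

scikit-learn's GridSearchCV automates exactly this loop, including the final refit on the whole dataset.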