Solved – cross-validation with batch gradient descent

cross-validation, gradient descent, machine learning, tensorflow

I have a question about using batch gradient descent along with cross-validation.

When using batch gradient descent, the data is split into batches that are used to train the model and update its parameters.
Cross-validation, on the other hand, consists of splitting the training data into multiple folds, training the model on some of the folds and testing it on the remaining one.

Is it meaningful to use both techniques in the same training process?

Best Answer

You are mixing apples and oranges here. Whatever supervised training technique you use (batch, mini-batch or stochastic gradient descent), you still need to split your data into training, validation and testing sets. (By the way, batch gradient descent technically refers to computing the gradient on the whole dataset; what you are describing is called mini-batch gradient descent.)
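To make that distinction concrete, here is a minimal sketch using Keras, where the `batch_size` argument alone decides which variant of gradient descent you get. The data shapes and the tiny model are made up purely for illustration:

```python
import numpy as np
import tensorflow as tf

# Toy regression data (hypothetical shapes, just for illustration).
X = np.random.rand(1000, 20).astype("float32")
y = np.random.rand(1000, 1).astype("float32")

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01), loss="mse")

# The batch_size argument picks the gradient-descent variant:
# model.fit(X, y, batch_size=len(X))  # batch GD: gradient over the whole dataset, one update per epoch
# model.fit(X, y, batch_size=1)       # stochastic GD: one update per sample
model.fit(X, y, batch_size=32, epochs=5)  # mini-batch GD: the usual middle ground
```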

With k-fold cross-validation, you simply don't hardcode which part of the data is used for training and which for validation; these splits change on each iteration of the k-fold cycle.
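You can see the rotating splits directly with scikit-learn's `KFold` (the ten dummy samples are just placeholders):

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(10)  # ten dummy samples
kf = KFold(n_splits=5, shuffle=True, random_state=0)

# Each iteration yields a different train/validation split of the same data.
for fold, (train_idx, val_idx) in enumerate(kf.split(X)):
    print(f"fold {fold}: train={train_idx}, validation={val_idx}")
```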

To answer your question: yes, it makes sense to combine batch gradient descent (or rather mini-batch) with cross-validation. You always need to pick a training algorithm and a way to split your data into train, validation and test sets, and these two choices are largely independent of each other. Keep in mind that k-fold cross-validation multiplies your training time by roughly k, so if you pick a slow training algorithm you may end up with a task that simply takes too long. A sketch of how the two fit together is below.
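Here is a minimal sketch of the combination, assuming a made-up binary-classification dataset and a hypothetical `build_model` helper; per fold, a fresh model is trained with mini-batch gradient descent and evaluated on the held-out fold:

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import KFold

# Hypothetical dataset; replace with your own features and labels.
X = np.random.rand(500, 20).astype("float32")
y = np.random.randint(0, 2, size=(500, 1)).astype("float32")

def build_model():
    # Fresh model per fold so no information leaks between folds.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

kf = KFold(n_splits=5, shuffle=True, random_state=42)
scores = []
for train_idx, val_idx in kf.split(X):
    model = build_model()
    # Mini-batch gradient descent inside each fold (batch_size=32).
    model.fit(X[train_idx], y[train_idx], batch_size=32, epochs=10, verbose=0)
    _, acc = model.evaluate(X[val_idx], y[val_idx], verbose=0)
    scores.append(acc)

print("mean validation accuracy:", np.mean(scores))
```

The averaged validation score across folds is what you would then use to compare hyperparameters or model variants, before retraining once on all the training data and reporting performance on a held-out test set.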