Machine Learning – Difference Between Test Set and Validation Set Explained

machine-learning, validation

I found this confusing when I used the Neural Network Toolbox in MATLAB.
It divides the raw data set into three parts:

  1. training set
  2. validation set
  3. test set

I notice that in many training or learning algorithms, the data is often divided into only two parts: the training set and the test set.

My questions are:

  1. What is the difference between a validation set and a test set?
  2. Is the validation set specific to neural networks, or is it optional?
  3. Going further, is there a difference between validation and testing in the context of machine learning?

Best Answer

Typically, to perform supervised learning, you need two types of data sets:

  1. In one data set (your "gold standard"), you have the input data together with the correct/expected output. This data set is usually carefully prepared, either by humans or by collecting data in a semi-automated way. Crucially, you must have the expected output for every data row here, because supervised learning requires it.

  2. The data you are going to apply your model to. In many cases, this is the data for which you want your model's output, so you don't have any "expected" output for it yet. A minimal sketch of both types follows this list.
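As an illustration, assuming Python with NumPy (the numbers below are made up purely for this example), the two types might look like:

```python
import numpy as np

# "Gold standard": every input row is paired with a correct/expected output.
X_gold = np.array([[5.1, 3.5], [4.9, 3.0], [6.2, 2.9], [5.9, 3.2]])
y_gold = np.array([0, 0, 1, 1])  # the expected output for each row

# Application data: inputs only; the model itself must supply the output.
X_new = np.array([[5.5, 3.1], [6.0, 2.8]])
```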

While performing machine learning, you do the following:

  1. Training phase: you present the data from your "gold standard" and train your model by pairing each input with its expected output.
  2. Validation/test phase: you estimate how well your model has been trained (this depends on the size of your data, the value you would like to predict, the inputs, etc.) and you estimate model properties (mean error for numeric predictors, classification error for classifiers, recall and precision for IR models, etc.).
  3. Application phase: now you apply your freshly developed model to the real-world data and get the results. Since you usually don't have any reference values in this type of data (otherwise, why would you need your model?), you can only speculate about the quality of your model's output using the results of your validation phase. All three phases are sketched in code after this list.
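As a minimal sketch of the three phases, assuming Python with scikit-learn (the iris dataset, the logistic-regression model, and the split sizes below are illustrative assumptions, not part of the answer itself):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # our "gold standard": inputs plus labels

# Hold out half of the gold standard for evaluation.
X_train, X_eval, y_train, y_eval = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# 1. Training phase: pair inputs with their expected outputs.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 2. Validation/test phase: estimate model properties on unseen labeled data.
print("held-out accuracy:", accuracy_score(y_eval, model.predict(X_eval)))

# 3. Application phase: predict on rows with no reference labels; we can only
#    trust these outputs as far as phase 2 said the model could be trusted.
X_new = X_eval[:3]  # stand-in for genuinely new, unlabeled data
print("predictions:", model.predict(X_new))
```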

The validation phase is often split into two parts:

  1. In the first part, you just look at your models and select the best-performing approach using the validation data (= validation).
  2. Then you estimate the accuracy of the selected approach (= test).

Hence the common 50/25/25 split. A sketch of this two-stage procedure follows.
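Here is one way the 50/25/25 workflow could look, again assuming Python with scikit-learn (the dataset and the two rival models are illustrative assumptions):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# 50% training; split the remaining 50% evenly into validation and test sets.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Validation: train the rival approaches and keep the best performer.
candidates = [LogisticRegression(max_iter=1000), DecisionTreeClassifier(random_state=0)]
best = max(
    (m.fit(X_train, y_train) for m in candidates),
    key=lambda m: accuracy_score(y_val, m.predict(X_val)),
)

# Test: estimate the accuracy of the selected approach on untouched data.
print(type(best).__name__, accuracy_score(y_test, best.predict(X_test)))
```

Note that the test set is touched exactly once, so the final accuracy estimate is not biased by the model-selection step.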

If you don't need to choose an appropriate model from among several rival approaches, you can simply re-partition your data set so that you have only a training set and a test set, without performing the validation of your trained model. I personally partition the data 70/30 in that case.
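A minimal sketch of that simpler 70/30 partition, under the same scikit-learn assumptions as above:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0  # 70% training, 30% test
)
print(len(X_train), len(X_test))  # 105 and 45 rows for the 150-row iris set
```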

