1. All relevant data (i.e., training, validation, testing, and unseen) are
assumed to be random samples from the same probability distribution.
2. Training data are assumed to be a representative sample of the whole
distribution. (However, performance estimates on training data CAN BE
EXTREMELY BIASED because the same data are used for both training and
estimation.)
3. Validation data are nontraining design data that are sufficiently
representative of nondesign data, so that training is sufficient for
the model to generalize to all relevant data. (Performance estimates
tend to be SLIGHTLY BIASED because validation data are still a subset
of the design data.)
4. Testing data are nondesign data that are used to obtain UNBIASED
performance estimates on all nontraining data (validation + testing
+ unseen).
5. Now, to answer your question: if the above assumptions hold, the trained
net is completely represented by the weights corresponding to the assumed
architecture.
Therefore, if you suspect new data may not obey the assumptions, you need
a prequalifier stage to verify that the input is likely to have come from
the appropriate probability distribution. This can be achieved in a
variety of ways, e.g., by checking summary statistics such as the min,
median, mean, standard deviation, and max, or by computing the Mahalanobis
distance to the original distribution.
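A minimal sketch of such a prequalifier, assuming NumPy is available; the function name and the distance threshold are hypothetical choices (for roughly Gaussian training data, a threshold near the square root of a chi-square quantile for the input dimension is one common rule of thumb):

```python
import numpy as np

def mahalanobis_prequalifier(x, train_data, threshold):
    """Accept an input x only if its Mahalanobis distance to the
    training distribution is within the given threshold.

    x          : 1-D array, the candidate input
    train_data : 2-D array (samples x features) of training inputs
    threshold  : maximum acceptable Mahalanobis distance (assumed)
    """
    mu = train_data.mean(axis=0)                 # training mean
    cov = np.cov(train_data, rowvar=False)       # training covariance
    cov_inv = np.linalg.inv(cov)                 # assumes cov is nonsingular
    diff = x - mu
    dist = np.sqrt(diff @ cov_inv @ diff)        # Mahalanobis distance
    return dist <= threshold
```

An input that passes the prequalifier is then forwarded to the trained net; one that fails is flagged as likely out-of-distribution, since the net's performance estimates say nothing about such inputs.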