For NN training it would be more useful to have a 3-part stratified division into training, validation, and test sets.
To make life easy, one could choose Nval = Ntst = M = round(N/k) and Ntrn = N - 2*M.
Then choose the indices so that each of the k*M examples is in the test set at least once and in the validation set at least once.
For N = 94 and k = 10: Nval = Ntst = 9 and Ntrn = 76.
For each of the k folds, the validation and test indices are drawn from the scrambled index vector S = randperm(N), which yields an unbiased selection of indices for every fold.
If N is sufficiently large, the fact that the abs(N - k*M) leftover examples will never appear in either of the two nontraining subsets will not be significant. Otherwise, additional code might be desired.
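The scheme above can be sketched in MATLAB as follows. The variable names N, k, M, and S follow the text; the fold-pairing rule (test fold i, validation fold i+1 cyclically) and the use of 'divideind' are my own illustrative choices, not prescribed by the answer:

```matlab
N = 94;             % number of examples
k = 10;             % number of folds
M = round(N/k);     % Nval = Ntst = M = 9
S = randperm(N);    % scrambled index vector

for i = 1:k
    tstind = S((i-1)*M+1 : i*M);            % test indices for fold i
    j      = mod(i, k) + 1;                 % next fold, cyclically
    valind = S((j-1)*M+1 : j*M);            % validation indices
    trnind = setdiff(S, [tstind, valind]);  % remaining Ntrn = N - 2*M examples
    % Hand the indices to the network, e.g.:
    % net.divideFcn            = 'divideind';
    % net.divideParam.trainInd = trnind;
    % net.divideParam.valInd   = valind;
    % net.divideParam.testInd  = tstind;
end
```

Note that the abs(N - k*M) leftover indices (4 here) always land in trnind, which is why they matter less when N is large.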
Of course, one alternative to using a validation set is to use regularization to avoid overtraining an overfit net. In general, this is done in one of two ways:
trainbr with its default form of msereg
fitnet or patternnet with the regularization option
Note that even though the default performance function for patternnet is crossentropy, the regularization option should still work.
That brings up the question of whether trainbr can be used with crossentropy.
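A minimal sketch of the two routes, assuming a 10-hidden-node net (the node count and the regularization ratio 0.1 are illustrative values, not recommendations):

```matlab
% Route 1: Bayesian regularization. trainbr determines the effective
% regularization automatically, so no validation set is needed.
net1 = fitnet(10, 'trainbr');
net1.divideFcn = 'dividetrain';   % use all data for training

% Route 2: explicit regularization via the performance ratio.
net2 = patternnet(10);            % default performance: crossentropy
net2.performParam.regularization = 0.1;   % illustrative ratio in [0,1]
```

Whether trainbr also accepts crossentropy as the performance function is the open question noted above; the sketch only shows the two standard routes.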
Hope this helps.
Thank you for formally accepting my answer
Greg