What I need to do is train a classification network (like the Pattern Recognition Tool) where each sample has a different weight: a sample's contribution to the network error would be proportional to its weight.
For example, given a mix of higher-weight and lower-weight samples, after training the network would classify the higher-weight samples more successfully, at the cost of some correct classifications among the lower-weight samples.
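The standard way to get this behavior is a weighted loss: multiply each sample's error term by its weight before averaging, so high-weight samples pull the decision boundary harder. (If this is MATLAB's Pattern Recognition Tool / patternnet, it may be worth checking whether `train` accepts per-sample error weights directly, rather than building a custom loop.) Below is a minimal NumPy sketch of the idea, not your toolbox's API: a 1-D logistic classifier trained with weighted cross-entropy, where the weights (here, 10x on one class, purely as an illustrative choice) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two overlapping 1-D classes.
x = np.concatenate([rng.normal(-1, 1, 100), rng.normal(1, 1, 100)])
y = np.concatenate([np.zeros(100), np.ones(100)])

# Hypothetical per-sample weights: class-1 samples count 10x more.
w = np.where(y == 1, 10.0, 1.0)

# Logistic model p = sigmoid(a*x + b), fit by gradient descent on the
# weighted cross-entropy: each sample's gradient term is scaled by w_i.
a, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(a * x + b)))
    g = w * (p - y)          # weighted residuals
    a -= lr * np.mean(g * x)
    b -= lr * np.mean(g)

p = 1.0 / (1.0 + np.exp(-(a * x + b)))
pred = (p > 0.5).astype(float)
acc_hi = np.mean(pred[y == 1] == 1)  # accuracy on high-weight samples
acc_lo = np.mean(pred[y == 0] == 0)  # accuracy on low-weight samples
```

The boundary shifts toward the low-weight class, so the high-weight samples are classified more accurately, exactly the trade-off described above.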
Does anyone know how to do this?
Currently my only idea on how to achieve this is, for each iteration of a loop:
1. Randomly assemble a subset of samples, with the chance of picking a sample proportional to its weight.
2. Train for one epoch.
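That resampling loop is a legitimate approximation: in expectation, training on weight-proportional bootstrap samples minimizes the same weighted loss. A minimal NumPy sketch of the two steps, again using a toy logistic classifier and hypothetical 10x weights rather than your actual network:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two overlapping 1-D classes, class 1 weighted 10x.
x = np.concatenate([rng.normal(-1, 1, 100), rng.normal(1, 1, 100)])
y = np.concatenate([np.zeros(100), np.ones(100)])
w = np.where(y == 1, 10.0, 1.0)
p_pick = w / w.sum()  # picking probability proportional to weight

a, b = 0.0, 0.0  # logistic model p = sigmoid(a*x + b)
lr = 0.1
n = len(x)
for epoch in range(300):
    # Step 1: weighted bootstrap subset for this epoch.
    idx = rng.choice(n, size=n, replace=True, p=p_pick)
    xs, ys = x[idx], y[idx]
    # Step 2: one epoch of (full-batch) gradient descent on the subset.
    p = 1.0 / (1.0 + np.exp(-(a * xs + b)))
    g = p - ys
    a -= lr * np.mean(g * xs)
    b -= lr * np.mean(g)

p = 1.0 / (1.0 + np.exp(-(a * x + b)))
pred = (p > 0.5).astype(float)
acc_hi = np.mean(pred[y == 1] == 1)  # high-weight class accuracy
acc_lo = np.mean(pred[y == 0] == 0)  # low-weight class accuracy
```

Sampling with replacement (as above) keeps the expected composition of each epoch exactly proportional to the weights; the main downside versus a directly weighted loss is the extra sampling variance, which matters most for small datasets or very skewed weights.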