Hi everyone,
I am working with a convolutional neural network (GoogLeNet), but instead of using classic "full" images, I am working with patches cropped out of the images. In other words, each class contains several images (represented as subfolders), and each image contains several patches (the PNG files).
I wrote a simple function that reads n random patches (PNG files) belonging to m random images on every run, and I am wondering how to integrate it into the training process. I basically want to use those n randomly selected training PNG files as the minibatch at every iteration. Should this be done within the "trainNetwork" function?
Is there an existing question or example that deals with this topic?
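For reference, here is a minimal MATLAB sketch of the kind of sampling function described above. The function name, the variable dataRoot, and the folder layout dataRoot/<class>/<image>/<patch>.png are all assumptions based on the description in the question; adjust them to the actual setup. Note that trainNetwork normally pulls minibatches from a datastore itself, so rather than calling a sampler like this inside trainNetwork, one common approach is to wrap the same logic in a custom datastore (a subclass of matlab.io.Datastore) or to use a custom training loop.

```matlab
% Hypothetical sketch: sample n random patches from m random images per
% class. Assumed folder layout: dataRoot/<class>/<image>/<patch>.png
function [patches, labels] = sampleRandomPatches(dataRoot, m, n)
    classDirs = dir(dataRoot);
    classDirs = classDirs([classDirs.isdir] & ~startsWith({classDirs.name}, '.'));
    patches = {};
    labels  = {};
    for c = 1:numel(classDirs)
        className = classDirs(c).name;
        imageDirs = dir(fullfile(dataRoot, className));
        imageDirs = imageDirs([imageDirs.isdir] & ~startsWith({imageDirs.name}, '.'));
        % Pick m random images (subfolders) from this class
        pick = randperm(numel(imageDirs), min(m, numel(imageDirs)));
        for i = pick
            pngs = dir(fullfile(dataRoot, className, imageDirs(i).name, '*.png'));
            % Pick n random patches (PNG files) from this image
            sel = randperm(numel(pngs), min(n, numel(pngs)));
            for p = sel
                patches{end+1} = imread(fullfile(pngs(p).folder, pngs(p).name)); %#ok<AGROW>
                labels{end+1}  = className; %#ok<AGROW>
            end
        end
    end
    labels = categorical(labels);
end
```

This is only a sketch of the sampling step, not a complete training pipeline; the open question remains how best to hand such minibatches to trainNetwork.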
Thank you very much.
Best regards