I wanted to speed up my neural network training, so I upgraded from a GTX 1080 to a Titan V, expecting a large performance increase from the improved architecture, memory bandwidth, etc.
Well, the 1080 is crushing the Titan V.
Transfer learning on AlexNet, training on the same pool of images with identical settings:
opts = trainingOptions('sgdm','InitialLearnRate',0.001, 'Plots', 'training-progress', 'MiniBatchSize', 512)
The Titan V moves at approximately 164 seconds per iteration, while the 1080 is cruising at 62 seconds per iteration.
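Not from the original post, but one way to narrow this down is to benchmark the raw GPU separately from the training pipeline. The sketch below assumes the Parallel Computing Toolbox (`gpuDevice`, `gputimeit`) and times a single-precision matrix multiply, which is roughly the workload convolution layers reduce to; the matrix size is an arbitrary choice for illustration.

```matlab
% Hypothetical sanity check: time a large single-precision matrix multiply
% on the currently selected GPU, isolating hardware throughput from data
% loading and the rest of the training pipeline.
gpu = gpuDevice;                        % which GPU MATLAB actually selected
fprintf('%s, compute capability %s\n', gpu.Name, gpu.ComputeCapability);

A = gpuArray.rand(4096, 'single');      % single precision, as used in training
f = @() A * A;                          % the operation to benchmark
t = gputimeit(f);                       % robust timing of f on the GPU
fprintf('4096x4096 single matmul: %.3f ms\n', 1e3 * t);
```

If the Titan V wins this raw benchmark but still loses during training, the bottleneck is likely elsewhere (driver/CUDA support for the newer card in the installed MATLAB release, or data transfer), rather than the GPU silicon itself.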
I'm flabbergasted that a GPU that is outclassed in every way somehow manages to win by such a large margin.
Does anyone have a similar experience or any explanation for why this might be happening?
Thanks in advance.
L.