MATLAB: Activations of frozen layers differ before/after training, why?

Tags: activations, deep learning, Deep Learning Toolbox, freeze layers, Parallel Computing Toolbox

I follow the example "transfer-learning-using-googlenet", where the last 3 layers ('loss3-classifier', 'prob', 'output') are replaced with 3 new ones. Then I 'freeze' the first 141 layers (that is, up to and including 'pool5-drop_7x7_s1'):
layers(1:141) = freezeWeights(layers(1:141));              % set the learn-rate factors of these layers to 0
lgraph = createLgraphUsingConnections(layers,connections); % rebuild the layer graph with the original connections
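For reference, freezeWeights and createLgraphUsingConnections are support functions shipped with that example, not toolbox built-ins. A minimal sketch of what freezeWeights does, assuming it simply zeroes every *LearnRateFactor property of the layers it is given:

function layers = freezeWeights(layers)
% Freeze layers by setting every learning-rate factor to zero, so that
% trainNetwork leaves their weights and biases unchanged during fine-tuning.
for ii = 1:numel(layers)
    props = properties(layers(ii));
    for p = 1:numel(props)
        if endsWith(props{p}, 'LearnRateFactor')
            layers(ii).(props{p}) = 0;
        end
    end
end
end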
Then I fine-tune the network as in the example.
Since 'pool5-7x7_s1' comes BEFORE 'pool5-drop_7x7_s1' (i.e., it lies inside the frozen part of the network), I would expect the following two vectors to be the same:
b_orig = activations(net_orig, I, 'pool5-7x7_s1');   % activations from the original GoogLeNet
b_tune = activations(net_tune, I, 'pool5-7x7_s1');   % activations from the fine-tuned network
but they aren't!… Any idea why?
P.S. I also tried the activations of several other layers BEFORE 'pool5-drop_7x7_s1' and got different vectors. Here 'I' is an image, net_orig = googlenet, and net_tune is the network obtained after fine-tuning.
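A quick sanity check (a sketch, assuming the frozen layers keep their names, e.g. 'conv1-7x7_s2', after fine-tuning) is to compare the frozen weights themselves; they should be identical, which shows the difference must be introduced before the weights are applied:

idx_o = find(strcmp({net_orig.Layers.Name}, 'conv1-7x7_s2'));   % locate the same frozen layer
idx_t = find(strcmp({net_tune.Layers.Name}, 'conv1-7x7_s2'));   % in each network by name
isequal(net_orig.Layers(idx_o).Weights, net_tune.Layers(idx_t).Weights)   % expected: true, frozen weights are unchanged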

Best Answer

The vectors are different because when you fine-tune on a new dataset, the average image stored in the network's imageInputLayer (used for 'zerocenter' normalization) is recalculated from your new training data. The weights of the frozen layers do not change, but the input image is normalized with a different average image before it reaches them, so their activations differ.
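You can check this directly; a minimal sketch, assuming a release in which the input layer exposes the average image as AverageImage (in newer releases the property is called Mean):

avg_orig = net_orig.Layers(1).AverageImage;   % mean image of the original googlenet
avg_tune = net_tune.Layers(1).AverageImage;   % mean image recomputed during fine-tuning
max(abs(avg_orig(:) - avg_tune(:)))           % nonzero, so frozen layers receive differently normalized inputs

If matching frozen-layer activations matter to you, both networks need to normalize the input with the same average image before you compare activations.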