Solved – Constant Accuracy with decreasing loss

accuracy, conv-neural-network, deep-learning, loss-functions, machine-learning

I am fairly new to the Cross Validated section, so I apologize if my question structure is incorrect.
I am currently working on Fully Convolutional Networks for Semantic Segmentation.

I am first trying to build the FCN-32 model from this paper. For this, I am using a VGG16 pre-trained model with the feature extraction layers fixed, i.e. freezing the feature extraction layers. After freezing these layers, I replaced the classifier layers with upsampling layers, i.e. transposed convolutions, so as to obtain an output with the same dimensions as the input. A rough sketch of this setup is shown below.
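For reference, here is a minimal sketch of what I mean, assuming PyTorch and torchvision; the 21-class output and layer sizes are illustrative, not my exact code.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 21  # e.g. PASCAL VOC; adjust to the dataset

class FCN32(nn.Module):
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        vgg = models.vgg16(pretrained=True)
        self.features = vgg.features            # convolutional feature extractor
        for p in self.features.parameters():    # freeze the feature extraction layers
            p.requires_grad = False
        # Replace the fully connected classifier with convolutional layers
        self.classifier = nn.Sequential(
            nn.Conv2d(512, 4096, kernel_size=7, padding=3),
            nn.ReLU(inplace=True),
            nn.Dropout2d(),
            nn.Conv2d(4096, 4096, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Dropout2d(),
            nn.Conv2d(4096, num_classes, kernel_size=1),
        )
        # Transposed convolution: upsample by 32x back to the input resolution
        self.upsample = nn.ConvTranspose2d(
            num_classes, num_classes, kernel_size=64, stride=32, padding=16
        )

    def forward(self, x):
        h = self.features(x)        # H/32 x W/32 feature map
        h = self.classifier(h)      # per-pixel class scores at coarse resolution
        return self.upsample(h)     # back to H x W
```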

The paper suggests training the network for 175 epochs or more with SGD, momentum = 0.9, learning_rate = 1e-4, and weight_decay = 5^(-4).
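My training configuration looks roughly like the following sketch, where `model` is the FCN-32 module from above and I interpret the paper's weight decay as 5e-4 (this interpretation is my assumption):

```python
import torch

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad),  # only the unfrozen layers
    lr=1e-4,
    momentum=0.9,
    weight_decay=5e-4,  # assumption: reading the paper's 5^(-4) as 5e-4
)
criterion = torch.nn.CrossEntropyLoss()  # pixel-wise cross entropy over class logits
```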

After setting these exact parameters, my pixel-wise accuracy and mean intersection over union (IoU) remain constant at 69% and 0.138 (13.8%) respectively, while the loss decreases very slowly.
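In case it matters, I compute these metrics in the standard way from a confusion matrix, roughly like this sketch (numpy only, names are illustrative):

```python
import numpy as np

def segmentation_metrics(preds, targets, num_classes):
    """preds, targets: integer label arrays of the same shape."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(targets.ravel(), preds.ravel()):
        conf[t, p] += 1
    pixel_acc = np.diag(conf).sum() / conf.sum()
    # per-class IoU = TP / (TP + FP + FN), then averaged over classes
    union = conf.sum(axis=0) + conf.sum(axis=1) - np.diag(conf)
    iou = np.diag(conf) / np.maximum(union, 1)
    return pixel_acc, iou.mean()
```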

To my knowledge, accuracy should increase if the loss is decreasing, right? What might be the reason for this constant accuracy with a decreasing loss? Am I doing something wrong here?

Best Answer

You can have a decreasing loss with the same accuracy if the algorithm gets more and more sure of the points it identified before. For example, if the NNet predicted a vector $(0.6, 0.6, 0.4)$, by optimising the weights, the prediction can change to $(0.99, 0.99, 0.01)$ - now the algorithm predicts exactly the same labels as before (because of the rounding) but has a lower loss.
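Here is a small numerical sketch of that effect, assuming binary labels and a 0.5 decision threshold, using the two prediction vectors from above:

```python
import numpy as np

def bce(y_true, y_pred):
    # binary cross-entropy, clipped for numerical stability
    y_pred = np.clip(y_pred, 1e-7, 1 - 1e-7)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1, 1, 0])
before = np.array([0.6, 0.6, 0.4])
after  = np.array([0.99, 0.99, 0.01])

print((before > 0.5).astype(int), bce(y_true, before))  # [1 1 0], loss ~0.51
print((after  > 0.5).astype(int), bce(y_true, after))   # [1 1 0], loss ~0.01
```

The predicted labels (and hence the accuracy) are identical in both cases, but the loss falls by an order of magnitude because the model has simply become more confident about predictions it was already getting right.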