Solved – Recall equals accuracy but differs from precision

accuracy, binary data, classification, precision-recall

I've read this question, and basically I'm having the same issue.

I'm dealing with a binary classification problem.

I'm calculating the precision, recall and F1 using the scikit-learn method precision_recall_fscore_support, explicitly passing average='weighted'.

For the accuracy, I'm also using a scikit-learn method, accuracy_score.

Every time I run my program I get the same values for recall and accuracy. I've also calculated the accuracy by hand (number of correct predictions divided by total number of predictions), and the results match those from the accuracy_score method.
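Here is a minimal version of what I'm doing; the labels below are invented just to reproduce the behaviour:

    import numpy as np
    from sklearn.metrics import precision_recall_fscore_support, accuracy_score

    # Invented binary labels, purely for illustration
    y_true = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
    y_pred = np.array([0, 0, 0, 1, 1, 1, 1, 1, 1, 0])

    # Note: support comes back as None when average is not None
    precision, recall, f1, support = precision_recall_fscore_support(
        y_true, y_pred, average='weighted'
    )
    acc = accuracy_score(y_true, y_pred)
    manual_acc = (y_true == y_pred).mean()  # correct predictions / total

    print(recall, acc, manual_acc)  # recall and both accuracies are all 0.7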


I can see that the recall calculation works out the same as the accuracy calculation if we pool all predictions regardless of class, but shouldn't average='weighted' calculate the recall for each class separately and then take a weighted average of them?

Best Answer

Let's assume your confusion matrix looks like this:

    TN  FP
    FN  TP

  • TN+FP is the support of the negative class in sklearn.
  • FN+TP is the support of the positive class in sklearn.
  • The support is the number of samples whose true label lies in that class.
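You can check this correspondence directly; the labels below are invented, and passing average=None makes precision_recall_fscore_support return the per-class values along with the support:

    import numpy as np
    from sklearn.metrics import confusion_matrix, precision_recall_fscore_support

    y_true = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
    y_pred = np.array([0, 0, 0, 1, 1, 1, 1, 1, 1, 0])

    # For binary labels, ravel() yields the counts in this order
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

    # With average=None, the fourth return value is the per-class support
    _, recall_per_class, _, support = precision_recall_fscore_support(
        y_true, y_pred, average=None
    )
    print(support)           # [5 5] -> [TN+FP, FN+TP]
    print(tn + fp, fn + tp)  # 5 5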

Accuracy is (TN+TP)/(TN+FN+FP+TP). Weighted recall is the support-weighted mean of the negative recall and the positive recall, where negative recall = TN/(TN+FP) and positive recall = TP/(TP+FN).

Summing them with their weights gives:

    Weighted recall = Negative recall * (TN+FP)/(TN+FN+TP+FP) + Positive recall * (TP+FN)/(TN+FN+TP+FP)
                    = TN/(TN+FN+TP+FP) + TP/(TN+FN+TP+FP)
                    = Accuracy.
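Plugging in some made-up confusion-matrix counts confirms the identity numerically:

    # Hypothetical counts, just to check the algebra
    tn, fp, fn, tp = 50, 10, 5, 35
    n = tn + fp + fn + tp

    neg_recall = tn / (tn + fp)  # TN/(TN+FP)
    pos_recall = tp / (tp + fn)  # TP/(TP+FN)

    # Each class recall weighted by its share of the true labels
    weighted_recall = neg_recall * (tn + fp) / n + pos_recall * (tp + fn) / n
    accuracy = (tn + tp) / n

    print(weighted_recall, accuracy)  # 0.85 0.85 -> identical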
