Solved – ROC curve and confusion matrix in classifier performance evaluation

classification, data mining, machine learning, roc

I applied two different classifiers to the same validation set. It turns out that classifier A is better than classifier B in terms of the ROC curve, but classifier B is better than classifier A in terms of the confusion matrix. How can this apparent contradiction be explained?

Best Answer

A ROC curve shows you performance across the full range of classification thresholds, while a confusion matrix shows you performance at a single threshold (typically $\Pr(y = 1 \mid x) > 0.5$). So there is no real contradiction: classifier A can dominate B when you sweep over all thresholds (e.g., a higher AUC), while B happens to do better at the one operating point your confusion matrix was computed at. This occurs, for instance, when A ranks cases well but its scores are poorly calibrated, so the default 0.5 cutoff is a bad operating point for A. Compare confusion matrices only after choosing a threshold appropriate to your misclassification costs, or compare at several thresholds.
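Here is a minimal NumPy sketch of this situation. The labels and scores are made up for illustration: classifier A separates the classes perfectly (AUC = 1) but is badly calibrated, so at the default 0.5 cutoff its confusion matrix is poor; classifier B ranks one pair of cases wrongly (lower AUC) but is well calibrated, so its confusion matrix at 0.5 looks better.

```python
import numpy as np

def confusion_at(y_true, scores, thr):
    """2x2 confusion matrix [[TN, FP], [FN, TP]] at a given threshold."""
    pred = (scores >= thr).astype(int)
    tn = int(np.sum((pred == 0) & (y_true == 0)))
    fp = int(np.sum((pred == 1) & (y_true == 0)))
    fn = int(np.sum((pred == 0) & (y_true == 1)))
    tp = int(np.sum((pred == 1) & (y_true == 1)))
    return np.array([[tn, fp], [fn, tp]])

def auc(y_true, scores):
    """AUC as the probability a random positive outscores a random negative."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return float(wins + 0.5 * ties)

# Hypothetical validation labels and scores (illustrative only).
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
# A: perfect ranking, but all scores squeezed above 0.5 (poor calibration).
a = np.array([0.51, 0.52, 0.53, 0.54, 0.55, 0.80, 0.90, 0.95])
# B: one ranking error, but scores roughly calibrated.
b = np.array([0.10, 0.20, 0.30, 0.70, 0.40, 0.80, 0.90, 0.95])

auc_a, auc_b = auc(y, a), auc(y, b)          # A wins on ROC: 1.0 vs 0.9375
cm_a = confusion_at(y, a, 0.5)               # [[0, 4], [0, 4]] -> accuracy 0.5
cm_b = confusion_at(y, b, 0.5)               # [[3, 1], [1, 3]] -> accuracy 0.75
acc_a = np.trace(cm_a) / cm_a.sum()
acc_b = np.trace(cm_b) / cm_b.sum()
```

At threshold 0.5, A predicts everything positive (accuracy 0.5) while B gets 6 of 8 right (accuracy 0.75), even though A's AUC is higher. Had the confusion matrix for A been computed at, say, 0.545, A would have been perfect, which is exactly why a single-threshold comparison can invert a ROC comparison.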