Solved – How to draw the ROC curve

machine-learning, roc, sensitivity-specificity

I'm currently wondering how to draw the ROC curve (Receiver Operating Characteristic curve). The x-axis shows the false positive rate (1 − specificity) and the y-axis shows the true positive rate (sensitivity). I'm using nested cross-validation, and when I evaluate the model I get only a single point of the ROC curve.

I looked it up on Google and found two approaches:

  1. Evaluating the model for many parameter combinations (model tuning). Each parameter combination gives one point on the ROC curve.

  2. Varying the probability cutoff. Each cutoff gives one point on the ROC curve.

Which one is right, or is there another way to do it?

Best Answer

An ROC curve visualizes the performance of a single model across different configurations (= probability cutoffs), so the second option is the right way.

With the first option you would be plotting points from different models (the same learning approach with different hyperparameters), which is not what an ROC curve represents. In fact, which point from each of these models would you even plot to make them somewhat calibrated and comparable? The point obtained at $P(\hat{Y}=1) = 0.5$ for every model?
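To make the cutoff sweep concrete, here is a minimal sketch (not from the original answer), assuming you already have true labels `y_true` and out-of-fold predicted probabilities `y_score` from your cross-validation; both arrays below are hypothetical placeholder data:

```python
import numpy as np

y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 0])                         # hypothetical labels
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.6, 0.3, 0.7, 0.5])   # hypothetical P(Y=1)

fpr_list, tpr_list = [], []
# Each cutoff turns the probabilities into hard 0/1 predictions and
# therefore yields exactly one (FPR, TPR) point on the ROC curve.
for cutoff in np.sort(np.unique(np.concatenate(([0.0, 1.0], y_score))))[::-1]:
    y_pred = (y_score >= cutoff).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    tpr_list.append(tp / (tp + fn))   # sensitivity
    fpr_list.append(fp / (fp + tn))   # 1 - specificity

# Connecting these points (ordered by FPR) gives the ROC curve.
# scikit-learn's roc_curve performs the same sweep for you:
# from sklearn.metrics import roc_curve
# fpr, tpr, thresholds = roc_curve(y_true, y_score)
```

The sweep runs from the highest cutoff (everything predicted negative, the point (0, 0)) down to the lowest (everything predicted positive, the point (1, 1)), which is why the curve always spans both corners.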