I'm currently working on a project that uses different sets of data as predictors for out-of-sample outcomes. I use AUC (the area under the ROC curve) to compare the performance of each set of data.
I am familiar with the theory behind AUC and ROC, but I'm wondering whether there is a precise standard for assessing an AUC value: for example, an AUC above 0.75 would be classified as 'good', or one below 0.55 as 'bad'.
Is there such a standard, or is AUC only ever meaningful for comparison?
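For context, here is a minimal sketch of the statistic I'm computing. It uses the Mann-Whitney interpretation of AUC (the probability that a randomly chosen positive case scores higher than a randomly chosen negative case); the scores and labels are made-up illustration data, not my actual project data.

```python
def auc(scores, labels):
    """AUC via pairwise comparison: fraction of (positive, negative)
    pairs where the positive case gets the higher score (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 3 positives, 3 negatives
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1, 1, 0, 1, 0, 0]
print(auc(scores, labels))  # 8 of 9 pairs ordered correctly -> 0.888...
```

A perfect ranking gives 1.0 and random guessing gives 0.5, which is why I'm unsure where the 'good'/'bad' cutoffs should sit in between.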
Best Answer
From the comments: