Normally we cannot draw an ROC curve for discrete classifiers such as decision trees.
Am I right?
Is there any way to draw an ROC curve for decision trees?
Solved – How to draw an ROC curve for decision trees
(figure: ROC curve for a CART decision tree)
Related Solutions
Yes, there are situations where the usual receiver operating characteristic (ROC) curve cannot be obtained and only a single point exists.
SVMs can be set up so that they output class membership probabilities; these are the usual values over which a threshold is varied to produce an ROC curve.
Is that what you are looking for?

Steps in the ROC curve usually come from having only a small number of test cases, not from any discrete variation in the covariate (in particular, you end up with the same points if you choose your discrete thresholds so that for each new point only one sample changes its assignment).
Continuously varying other (hyper)parameters of the model of course also produces sets of specificity/sensitivity pairs, giving other curves in the FPR;TPR coordinate system.
How such a curve is to be interpreted of course depends on which variation generated it.
Here's a usual ROC (i.e. requesting probabilities as output) for the "versicolor" class of the iris data set:
FPR;TPR (γ = 1, C = 1, varying probability threshold):
The same type of coordinate system, but TPR and FPR as a function of the tuning parameters γ and C:
FPR;TPR (varying γ, C = 1, probability threshold = 0.5):
FPR;TPR (γ = 1, varying C, probability threshold = 0.5):
These plots do have a meaning, but the meaning is decidedly different from that of the usual ROC!
Here's the R code I used:
library (e1071)   # svm
library (ROCR)    # prediction, performance

## fit an SVM on iris and return FPR/TPR over all probability thresholds
## for the "versicolor" class, together with the cost and gamma used
svmperf <- function (cost = 1, gamma = 1) {
  model <- svm (Species ~ ., data = iris, probability = TRUE,
                cost = cost, gamma = gamma)
  pred <- predict (model, iris, probability = TRUE, decision.values = TRUE)
  prob.versicolor <- attr (pred, "probabilities")[, "versicolor"]

  roc.pred <- prediction (prob.versicolor, iris$Species == "versicolor")
  perf <- performance (roc.pred, "tpr", "fpr")

  data.frame (fpr = perf@x.values [[1]], tpr = perf@y.values [[1]],
              threshold = perf@alpha.values [[1]],
              cost = cost, gamma = gamma)
}
## sweep the cost parameter over 2^-10 ... 2^10 (gamma fixed at 1)
df <- data.frame ()
for (cost in -10:10)
  df <- rbind (df, svmperf (cost = 2^cost))
head (df)
plot (df$fpr, df$tpr)

## for each cost, keep the (fpr, tpr) point at probability threshold 0.5
cost.df <- split (df, df$cost)
cost.df <- sapply (cost.df, function (x) {
  i <- approx (x$threshold, seq (nrow (x)), 0.5, method = "constant")$y
  x [i, ]
})
cost.df <- as.data.frame (t (cost.df))

plot (cost.df$fpr, cost.df$tpr, type = "l", xlim = 0:1, ylim = 0:1)
points (cost.df$fpr, cost.df$tpr, pch = 20,
        col = rev (rainbow (nrow (cost.df), start = 0, end = 4/6)))
## same sweep for gamma (cost fixed at 1)
df <- data.frame ()
for (gamma in -10:10)
  df <- rbind (df, svmperf (gamma = 2^gamma))
head (df)
plot (df$fpr, df$tpr)

## again keep, for each gamma, the point at probability threshold 0.5
gamma.df <- split (df, df$gamma)
gamma.df <- sapply (gamma.df, function (x) {
  i <- approx (x$threshold, seq (nrow (x)), 0.5, method = "constant")$y
  x [i, ]
})
gamma.df <- as.data.frame (t (gamma.df))

plot (gamma.df$fpr, gamma.df$tpr, type = "l", xlim = 0:1, ylim = 0:1, lty = 2)
points (gamma.df$fpr, gamma.df$tpr, pch = 20,
        col = rev (rainbow (nrow (gamma.df), start = 0, end = 4/6)))
## the usual ROC: cost = 1, gamma = 1, varying the probability threshold
roc.df <- subset (df, cost == 1 & gamma == 1)
plot (roc.df$fpr, roc.df$tpr, type = "l", xlim = 0:1, ylim = 0:1)
points (roc.df$fpr, roc.df$tpr, pch = 20,
        col = rev (rainbow (nrow (roc.df), start = 0, end = 4/6)))
As I see it, the possibility of refusing to classify a case as "too uncertain" is the whole point of choosing a threshold (as opposed to simply assigning the class with the highest predicted probability).
Of course, you should have some justification for setting the threshold to 0.5: you may just as well set it to 0.9 or any other value that is reasonable for your application.
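To make this concrete, here is a minimal sketch (my addition, not part of the original answer) of a binary decision rule with a rejection option; the function name classify_with_reject and the 0.9 threshold are purely illustrative choices:

## Sketch: turn a posterior probability for class A into "A", "not A",
## or "no decision". The 0.9 threshold is an arbitrary example value.
classify_with_reject <- function (p.A, threshold = 0.9) {
  ifelse (p.A >= threshold, "A",
          ifelse (1 - p.A >= threshold, "not A", "no decision"))
}

## Example with three hypothetical posterior probabilities for class A:
classify_with_reject (c(0.95, 0.60, 0.05))
# [1] "A"           "no decision" "not A"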
You describe a set-up with mutually exclusive classes (a closed-world problem). "No class reaches the threshold" can happen as soon as the threshold is higher than $1/n_{\text{classes}}$, i.e. the same problem occurs in a 2-class problem with a threshold of, say, 0.9. For a threshold of exactly $1/n_{\text{classes}}$ it could happen in theory (all classes tied), but in practice it is highly unlikely.
So your problem is not specific to the 3-class set-up, it is just more pronounced there.
To your second question: you can compute ROC curves for any kind of continuous output score; the scores do not even need to claim to be probabilities. Personally, I don't calibrate, because I don't want to spend another test set on that (I work with very restricted sample sizes). The shape of the ROC won't change anyway, since a (monotone) calibration map does not change the ranking of the scores.
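As a small illustration of both points (my own sketch, not the answerer's code; the labels and scores below are simulated toy data), the ROC of arbitrary uncalibrated scores is identical to the ROC of a strictly monotone re-mapping of those scores:

library (ROCR)

## toy data: arbitrary continuous scores (not probabilities) for a binary label
set.seed (42)
labels <- rep (c(TRUE, FALSE), each = 50)
scores <- rnorm (100, mean = ifelse (labels, 1, 0))

## ROC from the raw scores ...
perf.raw <- performance (prediction (scores, labels), "tpr", "fpr")
## ... and from a strictly monotone (sigmoid) re-mapping of the same scores
perf.cal <- performance (prediction (plogis (2 * scores - 1), labels), "tpr", "fpr")

plot  (perf.raw@x.values [[1]], perf.raw@y.values [[1]], type = "l",
       xlab = "fpr", ylab = "tpr")
lines (perf.cal@x.values [[1]], perf.cal@y.values [[1]], lty = 2, col = 2)  # identical curve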
Answer to your comment: the ROC conceptually belongs to a set-up that in my field is called single-class classification: does a patient have a particular disease or not? From that point of view, you can assign a 10% probability that the patient does have the disease. But this does not imply that with 90% probability he has some other well-defined condition: the complementary 90% actually belongs to a "dummy" class, not having that disease. For some diseases and tests, finding every affected patient may be so important that you set your working point at a threshold of 0.1. The textbook example of choosing such an extreme working point is HIV testing of blood donations.
So for constructing the ROC for class A (you'd say: the patient is A positive), you look at the class A posterior probabilities only. For binary classification with probability (not A) = 1 - probability (A), you don't need to plot the second ROC, as it does not contain any information that is not readily accessible from the first one.
In your 3-class set-up you can plot an ROC for each class. Depending on how you choose your thresholds, the result can be no classification, exactly one class, or more than one class assigned. What is sensible depends on your problem. E.g. if the classes are "hepatitis", "HIV", and "broken arm", then this policy is appropriate, as a patient may have none, some, or all of these.
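A sketch of what "one ROC per class" could look like, reusing the iris/SVM set-up from the answer above (the loop and the plotting code are my own illustrative additions, not the answerer's):

library (e1071)
library (ROCR)

## same iris SVM as above, but one ROC curve per class ("class vs. rest")
model <- svm (Species ~ ., data = iris, probability = TRUE)
prob  <- attr (predict (model, iris, probability = TRUE), "probabilities")

plot (0:1, 0:1, type = "n", xlab = "fpr", ylab = "tpr")
for (cl in levels (iris$Species)) {
  perf <- performance (prediction (prob [, cl], iris$Species == cl), "tpr", "fpr")
  lines (perf@x.values [[1]], perf@y.values [[1]],
         lty = which (levels (iris$Species) == cl))
}
legend ("bottomright", legend = levels (iris$Species), lty = 1:3)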
Best Answer
If your classifier produces only factor outcomes (only labels), without scores, you can still draw an ROC curve. However, this ROC curve is only a single point. In ROC space, this point is $(x,y) = (\text{FPR}, \text{TPR})$, where $\text{FPR}$ is the false positive rate and $\text{TPR}$ is the true positive rate.
See the Wikipedia page on receiver operating characteristic for more on how these are computed.
You can extend this point into something that looks like an ROC curve by drawing a line from $(0,0)$ to your point, and from there to $(1,1)$. Thus you have a curve.
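A minimal sketch of that construction, assuming some hypothetical confusion-matrix counts (the TP, FN, FP, TN values below are made up for illustration):

## hypothetical confusion-matrix counts of a label-only classifier
TP <- 40; FN <- 10    # positives classified correctly / incorrectly
FP <- 5;  TN <- 45    # negatives classified incorrectly / correctly

tpr <- TP / (TP + FN) # true positive rate (sensitivity)
fpr <- FP / (FP + TN) # false positive rate (1 - specificity)

## the classifier is a single point in ROC space; connecting
## (0,0) -> (fpr, tpr) -> (1,1) gives the two-segment "curve"
plot (c(0, fpr, 1), c(0, tpr, 1), type = "b", pch = 20,
      xlim = 0:1, ylim = 0:1, xlab = "fpr", ylab = "tpr")
abline (0, 1, lty = 3)  # chance diagonal for reference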
However, for a decision tree it is easy to go from a label output to a numeric output. Note that when you predict with a decision tree, you go down from the root node to a leaf node, where you predict the majority class. If instead of that class you return the proportions of the classes in that leaf node, you have a score for each class. Suppose that you have two classes $\text{T}$ and $\text{F}$, and in your leaf node there are 10 instances of $\text{T}$ and 5 instances of $\text{F}$; then you can return the vector of scores $(\text{score}_T, \text{score}_F) = \left( \frac{\text{count}_T}{\text{count}_T + \text{count}_F}, \frac{\text{count}_F}{\text{count}_T + \text{count}_F}\right) = (10/15, 5/15) \approx (0.67, 0.33)$. Take care that these raw leaf proportions are not well-calibrated probability estimates (they are not the best estimator of the class probabilities), but they are better than nothing, I believe, and this is how scores are usually retrieved for decision trees.
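A short sketch of this approach, assuming the rpart and ROCR packages (illustrative code of my own, not the answerer's): fit a CART tree, take the leaf class proportions returned by predict(..., type = "prob") as scores, and build the ROC curve from them.

library (rpart)
library (ROCR)

## CART tree on iris; predict(..., type = "prob") returns the class
## proportions of the leaf each observation falls into
tree <- rpart (Species ~ ., data = iris)
prob.versicolor <- predict (tree, iris, type = "prob")[, "versicolor"]

## ROC for "versicolor" vs. the rest; the few distinct leaf scores give few steps
perf <- performance (prediction (prob.versicolor, iris$Species == "versicolor"),
                     "tpr", "fpr")
plot (perf@x.values [[1]], perf@y.values [[1]], type = "b", pch = 20,
      xlim = 0:1, ylim = 0:1, xlab = "fpr", ylab = "tpr")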