Is accuracy an improper scoring rule in a binary classification setting?

Tags: accuracy, probability, scoring-rules

I have recently been learning about proper scoring rules for probabilistic classifiers. Several threads on this website have made a point of emphasizing that accuracy is an improper scoring rule and should not be used to evaluate the quality of predictions generated by a probabilistic model such as logistic regression.

However, quite a few academic papers I have read give misclassification loss as an example of a (non-strict) proper scoring rule in a binary classification setting. The clearest explanation I could find was in this paper, at the bottom of page 7. To the best of my understanding, minimizing misclassification loss is equivalent to maximizing accuracy, and the equations in the paper make sense intuitively.

For example: using the notation of the paper, if the true conditional probability (given some feature vector $x$) of the class of interest is $\eta = 0.7$, any forecast $q > 0.5$ would have an expected loss $R(\eta \mid q) = 0.7(0) + 0.3(1) = 0.3$, and any $q \leq 0.5$ would have an expected loss of $0.7$. The loss function would therefore be minimized at $q = \eta = 0.7$ (among other values) and would consequently be proper; the generalization to the entire range of true conditional probabilities and forecasts seems straightforward from there.
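
To make this concrete, here is a small sketch (my own illustration, not from the paper) that evaluates this expected loss over a grid of forecasts, using the classify-as-1-iff-$q > 0.5$ rule implied above:

```python
def expected_misclassification_loss(q, eta=0.7):
    """Expected loss R(eta | q) when we classify as 1 iff q > 0.5."""
    y_hat = 1 if q > 0.5 else 0
    # The loss is 1 exactly when the classification disagrees with the outcome.
    return eta * (1 - y_hat) + (1 - eta) * y_hat

for q in [0.10, 0.30, 0.50, 0.51, 0.70, 0.99]:
    print(f"q = {q:.2f}  ->  R(eta|q) = {expected_misclassification_loss(q):.2f}")
# Every forecast q > 0.5 attains the minimum of 0.3, including q = eta = 0.7,
# which is why the loss is proper but not *strictly* proper.
```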

Assuming the above calculations and statements are correct, the drawbacks of a non-unique minimum, with all predictions above 0.5 sharing the same minimum expected loss, are obvious. I still see no reason to use accuracy over traditional alternatives such as the log score or the Brier score. However, is it correct to say that accuracy is a proper scoring rule when evaluating probabilistic models in a binary setting, or am I making a mistake, either in my understanding of misclassification loss or in equating it with accuracy?

Best Answer

TL;DR

Accuracy is an improper scoring rule. Don't use it.

The slightly longer version

Actually, accuracy is not even a scoring rule. So asking whether it is (strictly) proper is a category error. The most we can say is that under additional assumptions, accuracy is consistent with a scoring rule that is improper, discontinuous and misleading. (Don't use it.)

Your confusion

Your confusion stems from the fact that misclassification loss as per the paper you cite is not a scoring rule, either.

The details: scoring rules vs. classification evaluations

Let us fix terminology. We are interested in a binary outcome $y\in\{0,1\}$, and we have a probabilistic prediction $\widehat{q} = \widehat{P}(Y=1)\in(0,1)$. We know that $P(Y=1)=\eta>0.5$, but our model $\widehat{q}$ may or may not know that.

A scoring rule is a mapping that takes a probabilistic prediction $\widehat{q}$ and an outcome $y$ to a loss,

$$ s\colon (\widehat{q},y) \mapsto s(\widehat{q},y). $$

$s$ is proper if it is optimized in expectation by $\widehat{q}=\eta$. ("Optimized" usually means "minimized", but some authors flip signs and try to maximize a scoring rule.) $s$ is strictly proper if it is optimized in expectation only by $\widehat{q}=\eta$.
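
Spelled out for a binary outcome, the expectation in question is

$$ \mathbb{E}\,[s(\widehat{q},Y)] = \eta\, s(\widehat{q},1) + (1-\eta)\, s(\widehat{q},0), $$

so $s$ is proper if this expression is optimized at $\widehat{q}=\eta$ for every $\eta$, and strictly proper if that optimizer is unique.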

We will typically evaluate $s$ on many predictions $\widehat{q}_i$ and corresponding outcomes $y_i$ and average to estimate this expectation.
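
As an illustration (a sketch of mine, not part of the original answer), this is how one might estimate the expected score of a constant prediction by simulation, here for the Brier score $s(\widehat{q},y)=(\widehat{q}-y)^2$, which is strictly proper:

```python
import numpy as np

rng = np.random.default_rng(42)
eta = 0.7                           # true P(Y = 1)
y = rng.binomial(1, eta, 100_000)   # simulated outcomes y_i

def mean_brier(q, y):
    """Average Brier score of the constant prediction q over outcomes y."""
    return np.mean((q - y) ** 2)

for q in [0.5, 0.7, 0.9]:
    print(f"q = {q:.1f}  ->  mean Brier score ~ {mean_brier(q, y):.3f}")
# The average is smallest at q = eta = 0.7 (about 0.21, versus 0.25 for the
# other two), as expected for a strictly proper scoring rule.
```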

Now, what is accuracy? Accuracy does not take a probabilistic prediction as an argument. It takes a classification $\widehat{y}\in\{0,1\}$ and an outcome:

$$ a\colon (\widehat{y},y)\mapsto a(\widehat{y},y) = \begin{cases} 1, & \widehat{y}=y \\ 0, & \widehat{y} \neq y. \end{cases} $$

Therefore, accuracy is not a scoring rule. It is a classification evaluation. (This is a term I just invented; don't go looking for it in the literature.)

Now, of course we can take a probabilistic prediction like our $\widehat{q}$ and turn it into a classification $\widehat{y}$. But to do so, we will need the additional assumptions alluded to above. For instance, it is very common to use a threshold $\theta$ and classify:

$$ \widehat{y}(\widehat{q},\theta) := \begin{cases} 1, & \widehat{q}\geq \theta \\ 0, & \widehat{q}<\theta. \end{cases} $$

A very common threshold value is $\theta=0.5$. Note that if we use this threshold and then evaluate the accuracy over many predictions $\widehat{q}_i$ (as above) and corresponding outcomes $y_i$, then we arrive exactly at the misclassification loss as per Buja et al. Thus, misclassification loss is also not a scoring rule, but a classification evaluation.
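
In code, the whole construction amounts to two small functions (the names are mine, purely for illustration):

```python
import numpy as np

def classify(q_hat, theta=0.5):
    """Turn probabilistic predictions q_hat into hard classifications y_hat."""
    return (np.asarray(q_hat) >= theta).astype(int)

def accuracy(y_hat, y):
    """Fraction of classifications that match the outcomes."""
    return np.mean(np.asarray(y_hat) == np.asarray(y))

q_hat = np.array([0.9, 0.6, 0.4, 0.8, 0.2])
y     = np.array([1,   0,   0,   1,   1  ])

acc = accuracy(classify(q_hat), y)
print("accuracy:", acc)                    # 0.6
print("misclassification loss:", 1 - acc)  # 0.4
```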

If we take a classification algorithm like the one above, we can turn a classification evaluation into a scoring rule. The point is that we need the additional assumptions of the classifier, and that accuracy, misclassification loss, or whatever other classification evaluation we choose may then depend less on the probabilistic prediction $\widehat{q}$ and more on the way we turn $\widehat{q}$ into a classification $\widehat{y}=\widehat{y}(\widehat{q},\theta)$. So optimizing the classification evaluation may amount to chasing a red herring if we are really interested in evaluating $\widehat{q}$.

Now, what is improper about these scoring-rules-under-additional-assumptions? Nothing, in the present case. $\widehat{q}=\eta$, under the implicit $\theta =0.5$, will maximize accuracy and minimize misclassification loss over all possible $\widehat{q}\in(0,1)$. So in this case, our scoring-rule-under-additional-assumptions is proper.

Note that what is important for accuracy or misclassification loss is only one question: do we classify ($\widehat{y}$) everything as the majority class or not? If we do, accuracy and misclassification loss are happy. If not, they aren't. What is important about this question is that it has only a very tenuous connection to the quality of $\widehat{q}$.

Consequently, our scoring-rules-under-additional-assumptions are not strictly proper, as any $\widehat{q}\geq\theta$ will lead to the same classification evaluation. We might use the standard $\theta=0.5$, believe that the majority class occurs with $\widehat{q}=0.99$ and classify everything as the majority class, because $\widehat{q}\geq\theta$. Accuracy is high, but we have no incentive to improve our $\widehat{q}$ to the correct value of $\eta$.

Or we might have done an extensive analysis of the asymmetric costs of misclassification and decided that the best classification probability threshold should actually be $\theta = 0.2$. For instance, this could happen if $y=1$ means that you suffer from some disease. It might be better to treat you even if you don't suffer from the disease ($y=0$), rather than the other way around, so it might make sense to treat people even if there is only a low predicted probability (small $\widehat{q}$) that they suffer from it. We might then have a horrendously wrong model that believes the true majority class occurs with only $\widehat{q}=0.25$, but because of the costs of misclassification, we would still classify everything as this (assumed) minority class, because again $\widehat{q}\geq\theta$. If we did this, accuracy or misclassification loss would make us believe we were doing everything right, even though our predictive model does not even get which one of our two classes is the majority one.
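
The following sketch (numbers chosen by me for illustration) puts both scenarios side by side: all three models reach the same accuracy, and only a strictly proper scoring rule such as the Brier score tells them apart:

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 0.7                           # true P(Y = 1)
y = rng.binomial(1, eta, 100_000)

def accuracy(q_hat, y, theta):
    return np.mean((q_hat >= theta) == y)

def mean_brier(q_hat, y):
    return np.mean((q_hat - y) ** 2)

# theta = 0.5: a calibrated model versus a wildly overconfident one.
for q in [0.70, 0.99]:
    print(f"theta=0.50, q_hat={q:.2f}: "
          f"accuracy={accuracy(q, y, 0.50):.2f}, Brier={mean_brier(q, y):.3f}")

# theta = 0.2: a model that gets the majority class outright wrong.
print(f"theta=0.20, q_hat=0.25: "
      f"accuracy={accuracy(0.25, y, 0.20):.2f}, Brier={mean_brier(0.25, y):.3f}")
# All three report accuracy 0.70; the Brier score (roughly 0.21 vs 0.29 vs 0.41)
# is what exposes the miscalibrated models.
```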

Therefore, accuracy or misclassification loss can be misleading.

In addition, accuracy and misclassification loss can be improper even under the additional assumptions above in more complex situations where the outcomes are not iid. Frank Harrell, in his blog post Damage Caused by Classification Accuracy and Other Discontinuous Improper Accuracy Scoring Rules, cites an example from one of his books where using accuracy or misclassification loss leads to a misspecified model, since they are not optimized by the correct conditional predictive probability.

Another problem with accuracy and misclassification loss is that they are discontinuous as a function of the threshold $\theta$. Frank Harrell goes into this, too.
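
A tiny sketch (toy numbers of mine) makes the discontinuity visible: accuracy jumps each time the threshold $\theta$ crosses one of the predicted probabilities:

```python
import numpy as np

q_hat = np.array([0.30, 0.45, 0.55, 0.70, 0.90])
y     = np.array([0,    1,    0,    1,    1   ])

for theta in [0.20, 0.40, 0.50, 0.60, 0.80, 0.95]:
    acc = np.mean((q_hat >= theta).astype(int) == y)
    print(f"theta = {theta:.2f}  ->  accuracy = {acc:.2f}")
# The output hops between 0.4, 0.6 and 0.8 with no gradual transition:
# accuracy is a step function of theta.
```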

More information can be found at Why is accuracy not the best measure for assessing classification models?

The bottom line

Don't use accuracy. Nor misclassification loss.

The nitpick: "strict" vs. "strictly"

Should we be talking about "strict" proper scoring rules, or about "strictly" proper scoring rules? The modifier applies to "proper", not to "scoring rule": there are "proper scoring rules" and "strictly proper scoring rules", but no "strict scoring rules". Since it modifies an adjective, it should be an adverb, so "strictly" is the correct choice. It is also the more common form in the literature, e.g., in the papers by Tilmann Gneiting.
