There is quite a bit of terminological confusion in this area. Personally, I always find it useful to come back to a confusion matrix to think about this. In a classification / screening test, you can have four different situations:
                     Condition: A        Condition: Not A
Test says “A”        True positive    |  False positive
Test says “Not A”    False negative   |  True negative
In this table, “true positive”, “false negative”, “false positive” and “true negative” are events (or their probability). What you have is therefore probably a true positive rate and a false negative rate. The distinction matters because it emphasizes that both numbers have a numerator and a denominator.
Where things get a bit confusing is that you can find several definitions of “false positive rate” and “false negative rate”, with different denominators.
For example, Wikipedia provides the following definitions (they seem pretty standard):
- True positive rate (or sensitivity): $TPR = TP/(TP + FN)$
- False positive rate: $FPR = FP/(FP + TN)$
- True negative rate (or specificity): $TNR = TN/(FP + TN)$
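These column-denominator definitions are easy to check directly from confusion-matrix counts. A minimal sketch, with made-up counts purely for illustration:

```python
# Column-based ("standard") rates from a confusion matrix.
# The counts are invented for illustration only.
TP, FP = 90, 30    # test says "A"
FN, TN = 10, 270   # test says "Not A"

TPR = TP / (TP + FN)  # sensitivity: P(test says "A" | A)
FPR = FP / (FP + TN)  # P(test says "A" | not A)
TNR = TN / (FP + TN)  # specificity: P(test says "Not A" | not A)
FNR = FN / (TP + FN)  # P(test says "Not A" | A)

print(TPR, FPR, TNR, FNR)  # 0.9 0.1 0.9 0.1

# The complementary relationships hold by construction:
assert abs(FNR - (1 - TPR)) < 1e-12
assert abs(TNR - (1 - FPR)) < 1e-12
```

Note that the denominators (TP + FN and FP + TN) are the column totals of the table above, as stated.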
In all cases, the denominator is the column total. This also gives a cue to their interpretation: The true positive rate is the probability that the test says “A” when the real value is indeed A (i.e., it is a conditional probability, conditioned on A being true). This does not tell you how likely you are to be correct when calling “A” (i.e., the probability of a true positive, conditioned on the test result being “A”).
Assuming the false negative rate is defined in the same way, we then have $FNR = 1 - TPR$ (note that your numbers are consistent with this). We cannot, however, directly derive the false positive rate from either the true positive or false negative rates, because they provide no information on the specificity, i.e., how the test behaves when “not A” is the correct answer. The answer to your question would therefore be “no, it's not possible”, because you have no information on the right column of the confusion matrix.
There are however other definitions in the literature. For example, Fleiss (Statistical methods for rates and proportions) offers the following:
- “[…] the false positive rate […] is the proportion of people, among those responding positive who are actually free of the disease.”
- “The false negative rate […] is the proportion of people, among those responding negative on the test, who nevertheless have the disease.”
(He also acknowledges the previous definitions but considers them “wasteful of precious terminology”, precisely because they have a straightforward relationship with sensitivity and specificity.)
Referring to the confusion matrix, it means that $FPR = FP / (TP + FP)$ and $FNR = FN / (TN + FN)$ so the denominators are the row totals. Importantly, under these definitions, the false positive and false negative rates cannot directly be derived from the sensitivity and specificity of the test. You also need to know the prevalence (i.e., how frequent A is in the population of interest).
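A short sketch of why these row-denominator (Fleiss-style) rates require the prevalence: via Bayes' rule, the same sensitivity and specificity give very different false positive rates at different prevalences. The numbers below are illustrative, not from any real test.

```python
# Fleiss-style FPR = FP/(TP + FP) depends on prevalence, not only on
# sensitivity and specificity. Illustrative numbers only.
def fleiss_fpr(sens, spec, prev):
    """P(not A | test says 'A'), computed via Bayes' rule."""
    tp = sens * prev              # P(test "A" and A)
    fp = (1 - spec) * (1 - prev)  # P(test "A" and not A)
    return fp / (tp + fp)

# The same test (sensitivity 0.9, specificity 0.9) at two prevalences:
print(fleiss_fpr(0.9, 0.9, 0.50))  # 0.10: half the population has A
print(fleiss_fpr(0.9, 0.9, 0.01))  # ~0.92: positives are mostly false alarms
```

This is the familiar rare-disease effect: with a low prevalence, even an accurate test produces mostly false positives among its positive calls.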
Fleiss does not use or define the phrases “true negative rate” or “true positive rate”, but if we assume those are also conditional probabilities given a particular test result / classification, then @guill11aume's answer is the correct one.
In any case, you need to be careful with the definitions because there is no indisputable answer to your question.
I would say that there is no single measure you should take into account.
The last time I did probabilistic classification, I used the R package ROCR together with explicit cost values for false positives and false negatives.
I considered all cutoff points from 0 to 1 and used several measures, such as expected cost, when selecting the cutoff point. Of course, I already had the AUC as a general measure of classification accuracy, but for me it was not the only option.
Cost values for the FP and FN cases must come from outside your particular model; perhaps they can be provided by a subject-matter expert?
For example, in customer churn analysis it might be expensive to incorrectly infer that a customer is not churning, but it would also be expensive to give a general price reduction on services without the accuracy to target it at the correct groups.
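The cutoff sweep described above can be sketched in a few lines. This is a Python sketch of the same idea (the answer used R's ROCR); the scores, labels, and cost values are all made up for illustration:

```python
import numpy as np

# Cost-based cutoff selection: sweep thresholds and pick the one that
# minimizes total misclassification cost. Synthetic data throughout.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)  # 1 = positive class
scores = np.clip(labels * 0.3 + rng.normal(0.35, 0.2, 1000), 0, 1)

c_fp, c_fn = 1.0, 5.0  # assumed costs, e.g. from a subject-matter expert

cutoffs = np.linspace(0, 1, 101)
costs = []
for t in cutoffs:
    pred = scores >= t
    fp = np.sum(pred & (labels == 0))   # false positives at this cutoff
    fn = np.sum(~pred & (labels == 1))  # false negatives at this cutoff
    costs.append(c_fp * fp + c_fn * fn)

best = cutoffs[int(np.argmin(costs))]
print(f"cutoff minimizing expected cost: {best:.2f}")
```

Because false negatives are costed five times as heavily here, the chosen cutoff ends up lower than the default 0.5, trading extra false positives for fewer misses.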
-Analyst
Best Answer
You are misunderstanding. The statement is standard—I've said a version of it myself many times. It applies to a given dataset and a specific model of it. Certainly, with better data, more informative variables, and a better model, you can do better on both metrics. If those are available to you, then have at it. In the end, though, you have what you have. At that point, if you wanted to convert the model's output (say, a predicted probability) into a predicted category, you would need to compare the output value for an observation to a threshold and give that observation a thumbs up or thumbs down. Within those constraints, you can improve your false negative rate (or your false positive rate) by changing the threshold, but the other will get worse. That is, if you want predicted categories (and you don't necessarily have to get them) you will face a trade-off between those two rates. The ROC curve shows you the set of trade-offs available to you.
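The threshold trade-off is easy to demonstrate: with fixed scores, moving the threshold can only exchange false positives for false negatives. A minimal sketch with synthetic scores and labels:

```python
import numpy as np

# With a fixed set of scores, raising the classification threshold
# lowers the false positive rate but raises the false negative rate.
# Scores and labels are synthetic, for illustration only.
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=2000)
scores = np.where(labels == 1,
                  rng.normal(0.65, 0.15, 2000),   # positives score higher
                  rng.normal(0.35, 0.15, 2000))   # negatives score lower

def rates(threshold):
    pred = scores >= threshold
    fpr = np.sum(pred & (labels == 0)) / np.sum(labels == 0)
    fnr = np.sum(~pred & (labels == 1)) / np.sum(labels == 1)
    return fpr, fnr

for t in (0.3, 0.5, 0.7):
    fpr, fnr = rates(t)
    print(f"threshold {t:.1f}: FPR={fpr:.2f}  FNR={fnr:.2f}")
# As the threshold rises, FPR falls while FNR rises.
```

Tracing (FPR, TPR) over all thresholds is exactly what the ROC curve plots: the menu of trade-offs available for this model on this data.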