How are performance measures affected in PU learning?

Tags: classification, machine-learning, semi-supervised-learning

When learning from only positive and unlabelled data (PU learning), how are performance measures affected compared to a standard supervised setting?

For simplicity, let's assume that the entire unlabelled set is treated as negative.

For example, I intuitively think that the number of true positives will be underestimated, because some of the unlabelled observations are in reality positive.

However, I can't really wrap my head around other measures. What happens to these?

  • True Positives
  • True Negatives
  • False Positives
  • False Negatives
  • Accuracy
  • AUC
  • Precision
  • Recall/Sensitivity
  • Specificity

Best Answer

Introduction

Many practical applications have only positive and unlabeled data (a setting known as PU learning), which complicates both building and evaluating classifiers. Evaluating a classifier with only positive and unlabeled data is tricky and can only be done by making assumptions, which may or may not be reasonable for a given real-world problem.

Shameless self-advertisement: For a detailed overview, I suggest reading my paper on the subject.


I will describe the main effects of the PU learning setting on performance metrics that are based on contingency tables. A contingency table relates the predicted labels to the true labels:

+---------------------+---------------------+---------------------+
|                     | positive true label | negative true label |
+---------------------+---------------------+---------------------+
| positive prediction | true positive       | false positive      |
| negative prediction | false negative      | true negative       |
+---------------------+---------------------+---------------------+
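In a fully supervised setting, these four cells follow directly from comparing predictions with true labels. As a minimal sketch (names and encoding are illustrative, with 1 = positive and 0 = negative):

```python
# Minimal sketch: count the four contingency-table cells when true labels are known.
def contingency_table(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

print(contingency_table([1, 1, 0, 0], [1, 0, 1, 0]))  # (1, 1, 1, 1)
```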

The problem in PU learning is that we don't know the true labels, which affects all cells of the contingency table (not just the last column!). It is impossible to make claims about the effect of the PU setting on performance metrics without additional assumptions. For example, if your known positives are a biased sample, you cannot make any reliable inferences (and such bias is common!).


Treating the unlabeled set as negative

A common simplification in PU learning is to treat the unlabeled set as if it were negative, and then compute metrics as if the problem were fully supervised. Sometimes this is good enough, but in many cases it is detrimental, so I strongly recommend against it.

Effect on precision. Say we want to compute precision:

$$p = \frac{TP}{TP + FP}.$$

Now, suppose we have a classifier that would be perfect if we knew the true labels (i.e., no false positives, $p=1$). In the PU setting, using the approximation that the unlabeled set is negative, only a fraction of the (in reality) true positives are labeled as such; the rest are counted as false positives, immediately yielding $\hat{p} < 1$. This is obviously wrong, but it gets worse: the estimation error can be arbitrarily large, depending on the ratio of known positives to latent positives. Suppose only 1% of the positives are known and the rest are in the unlabeled set; then, still with a perfect classifier, we would get $\hat{p} = 0.01$ ... Yuck!
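To make that concrete, here is a small numerical sketch of the 1% scenario (all numbers and names are illustrative):

```python
# Illustrative scenario: 1000 positives in total, only 1% of them labeled,
# and a classifier that is assumed perfect (it flags exactly the positives).
n_pos = 1000
known_pos = int(0.01 * n_pos)       # 10 labeled positives
latent_pos = n_pos - known_pos      # 990 positives hidden in the unlabeled set

# Fully supervised view: every flagged example really is positive.
precision_true = n_pos / (n_pos + 0)                 # 1.0

# PU approximation: unlabeled examples count as negative, so the 990 flagged
# latent positives are counted as false positives.
precision_pu = known_pos / (known_pos + latent_pos)  # 0.01

print(precision_true, precision_pu)                  # 1.0 0.01
```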

Effect on other metrics (the short simulation after this list illustrates these directions):

  • True Positives: underestimated
  • True Negatives: overestimated
  • False Positives: overestimated
  • False Negatives: underestimated
  • Accuracy: can be over- or underestimated, depending on class balance and the classifier
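
A short simulation of these directions (the numbers, the labeled fraction, and the imperfect classifier are purely illustrative):

```python
import random
random.seed(0)

# Illustrative setup: 1000 positives, 9000 negatives, a classifier that is
# correct 90% of the time on each class, and only 20% of positives labeled.
n_pos, n_neg, labeled_frac, acc = 1000, 9000, 0.2, 0.9

y_true, y_pu, y_pred = [], [], []
for _ in range(n_pos):
    y_true.append(1)
    y_pu.append(1 if random.random() < labeled_frac else 0)  # unlabeled positives -> 0
    y_pred.append(1 if random.random() < acc else 0)
for _ in range(n_neg):
    y_true.append(0)
    y_pu.append(0)
    y_pred.append(0 if random.random() < acc else 1)

def cells(y, pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y, pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y, pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y, pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y, pred))
    return {"TP": tp, "FP": fp, "FN": fn, "TN": tn}

print("true labels:", cells(y_true, y_pred))  # roughly TP 900, FP 900, FN 100, TN 8100
print("PU labels:  ", cells(y_pu, y_pred))    # TP and FN shrink, FP and TN grow
```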

For AUC, sensitivity, and specificity, I recommend reading the paper, as describing them in sufficient detail here would take us too far.


Start from the rank distribution of known positives

A reasonable assumption is that the known positives are a representative subset of all positives (e.g., a random, unbiased sample). Under this assumption, the distribution of decision values of the known positives can be used as a proxy for the distribution of decision values of all positives (and hence also for the associated ranks). This enables us to compute strict bounds on all entries of the contingency table, which then translate into (guaranteed!) bounds on all derived performance metrics.

A crucial observation we've made is that, in the PU learning context and under the assumption above, the bounds on most performance metrics are a function of the fraction of positives in the unlabeled set ($\beta$). We have shown that computing (bounds on) performance metrics without an estimate of $\beta$ is essentially impossible, as the bounds are then no longer strict.
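
As a rough illustration of the proxy idea only (this is not the bound construction from the paper; the function, thresholds, and synthetic scores below are assumptions made for the example): given an estimate of $\beta$, the score distribution of the known positives can be used to estimate how many latent positives in the unlabeled set exceed a decision threshold.

```python
import numpy as np

def corrected_tp_estimate(scores_known_pos, scores_unlabeled, threshold, beta):
    """Sketch: estimate true positives at a threshold, assuming the known
    positives are representative of all positives and beta (the fraction of
    positives in the unlabeled set) has been estimated separately."""
    scores_known_pos = np.asarray(scores_known_pos)

    # Fraction of known positives above the threshold; under the
    # representativeness assumption this approximates the recall on the
    # latent positives as well.
    recall_hat = np.mean(scores_known_pos >= threshold)

    # Estimated number of latent positives in the unlabeled set, and how many
    # of them we expect above the threshold.
    n_latent_pos = beta * len(scores_unlabeled)
    tp_latent_hat = recall_hat * n_latent_pos

    # Labeled positives above the threshold are observed directly.
    tp_known = np.sum(scores_known_pos >= threshold)
    return tp_known + tp_latent_hat

# Toy usage with synthetic decision values.
rng = np.random.default_rng(0)
scores_pos = rng.normal(1.0, 1.0, size=50)     # known positives
scores_unl = rng.normal(0.0, 1.0, size=1000)   # unlabeled (mostly negatives)
print(corrected_tp_estimate(scores_pos, scores_unl, threshold=0.5, beta=0.05))
```

The point of the sketch is that the correction hinges on $\beta$: without it, the number of latent positives above the threshold cannot be pinned down, which is exactly why the bounds fall apart without an estimate of $\beta$.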
