Solved – Why use d-prime instead of percent correct

bias, d-prime, roc, signal detection

In signal detection theory, people often use $d'$ to assess performance. Apart from the fact that $d'$ is in $z$ units (units of measurement transformed to standard deviation units, i.e., $z$ scores), which makes it comparable regardless of the original units of measurement, I can't see what the advantage of analysing $d'$ instead of proportion correct is.

Don't both account for bias? Would they follow the same-shaped ROC curves?

Best Answer

$d'$ is a measure of sensitivity, whereas proportion correct is affected by both sensitivity and bias.
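For reference (the answer does not spell it out, but this is the standard equal-variance definition), with hit rate $H$ and false-alarm rate $F$:

$$d' = z(H) - z(F) = \Phi^{-1}(H) - \Phi^{-1}(F)$$

Any criterion shift that moves $H$ and $F$ together can leave $d'$ unchanged while still changing proportion correct.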

In the special case of two balanced classes (the same number of signal and noise trials) and zero bias, $d'$ maps monotonically onto proportion correct, so it offers no additional insight. However, if the two classes are not balanced or the bias is not zero, the two measures can diverge considerably. Consider these two examples (both illustrated numerically below):

  1. A dataset with 70% signal trials and 30% noise trials. An observer / classifier that always responds 'signal' will have a proportion correct of 0.7 but a $d'$ of zero.

  2. A dataset with balanced classes and a classifier with $d' = 1$. Zero bias produces the maximal proportion correct, and any increase or decrease in the bias is expected to decrease proportion correct (think of the case of extreme biases).
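Not part of the original answer, but a small numerical sketch (Python, using `scipy.stats.norm`) that works through both cases may help make the contrast concrete. The clamping of the hit and false-alarm rates in case 1, and the equal-variance Gaussian model with criterion $c$ in case 2, are assumptions of the sketch:

```python
from scipy.stats import norm

def dprime(hit_rate, fa_rate):
    """Equal-variance d': z(H) - z(F). Rates must lie strictly in (0, 1)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Case 1: 70% signal trials, observer always responds 'signal'.
# H and F are both 1, so clamp them slightly before the z-transform
# (one of several common corrections; the exact value is arbitrary here).
H, F = 0.999, 0.999
p_signal = 0.7
prop_correct = p_signal * 1.0 + (1 - p_signal) * 0.0
print(prop_correct, dprime(H, F))   # 0.7 proportion correct, d' = 0

# Case 2: balanced classes, d' = 1, equal-variance Gaussians with means 0 and d'.
# The observer responds 'signal' whenever the observation exceeds a criterion c.
d = 1.0
for c in [-2.0, -1.0, d / 2, 2.0, 3.0]:
    H = 1 - norm.cdf(c, loc=d)          # hit rate
    F = 1 - norm.cdf(c, loc=0)          # false-alarm rate
    pc = 0.5 * H + 0.5 * (1 - F)        # proportion correct with balanced classes
    print(f"c={c:+.1f}  d'={dprime(H, F):.2f}  prop. correct={pc:.3f}")
```

With $d'$ fixed at 1, proportion correct peaks at about 0.69 for the unbiased criterion $c = d'/2$ and falls toward 0.5 as the criterion becomes extreme, which is exactly the divergence described above.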
