Binary Classification – Optimizing AUC vs Logloss

Tags: auc, binary data, classification, log-loss

I am performing a binary classification task where the probability of the positive outcome is fairly low (around 3%). I am trying to decide whether to optimize for AUC or for log-loss. As far as I understand, AUC measures the model's ability to discriminate between the classes, while log-loss penalizes the divergence between the actual and the estimated probabilities. In my task it is extremely important that the predicted probabilities are well calibrated, so I would choose log-loss, but I wonder whether the best log-loss model should also be the best AUC / Gini model.
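
For concreteness, the two criteria can be written as

$$\text{AUC} = \Pr\!\left(\hat p_i > \hat p_j \mid y_i = 1,\, y_j = 0\right), \qquad \text{LogLoss} = -\frac{1}{n}\sum_{i=1}^{n}\left[y_i \log \hat p_i + (1-y_i)\log(1-\hat p_i)\right],$$

so AUC depends only on the ranking of the predicted probabilities $\hat p_i$, while log-loss depends on their actual values.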

Best Answer

As you mention, AUC is a rank statistic (i.e. scale invariant) and log loss is a calibration statistic. One may trivially construct a model that has the same AUC as another model but a worse log loss, simply by applying a monotone transformation to the predicted probabilities (here, by rescaling a coefficient). Consider:

# AUC via the Mann-Whitney U statistic: AUC = 1 - U / (n0 * n1), where U comes
# from wilcox.test(prediction ~ actual) and n0, n1 are the class counts.
# as.double() avoids integer overflow in the product of the counts.
auc <- function(prediction, actual) {
  mann_whit <- wilcox.test(prediction ~ actual)$statistic
  1 - mann_whit / (sum(actual) * as.double(sum(!actual)))
}

# Log loss: average negative log-likelihood of the predicted probabilities.
log_loss <- function(prediction, actual) {
  -1 / length(prediction) * sum(actual * log(prediction) + (1 - actual) * log(1 - prediction))
}

# Simulate data with a rare positive class and a single Gaussian predictor
# whose mean is shifted by `effect_size` for the positives.
sampled_data <- function(effect_size, positive_prior = .03, n_obs = 5e3) {
  y <- rbinom(n_obs, size = 1, prob = positive_prior)
  data.frame(y = y,
             x1 = rnorm(n_obs, mean = ifelse(y == 1, effect_size, 0)))
}

train_data <- sampled_data(4)
m1 <- glm(y ~ x1, data = train_data, family = 'binomial')

# m2 is m1 with its slope doubled: the ranking of the predictions is unchanged,
# but the predicted probabilities are distorted.
m2 <- m1
m2$coefficients[2] <- 2 * m2$coefficients[2]

m1_predictions <- predict(m1, newdata = train_data, type = 'response')
m2_predictions <- predict(m2, newdata = train_data, type = 'response')

auc(m1_predictions, train_data$y)
#0.9925867 
auc(m2_predictions, train_data$y)
#0.9925867 

log_loss(m1_predictions, train_data$y)
#0.01985058
log_loss(m2_predictions, train_data$y)
#0.2355433
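
As a sanity check on the rank-vs-calibration distinction, one can recalibrate m2's predictions with a monotone transformation and recover the log loss without touching the AUC. This is only a sketch reusing the objects defined above; `qlogis()` is base R's logit.

# Refit a logistic regression on the logit of m2's predictions (a monotone
# recalibration). The ranking of the predictions -- and hence the AUC -- is
# unchanged, but the probabilities are pulled back onto the right scale,
# so the log loss drops back to (essentially) m1's value.
recalibrated <- glm(train_data$y ~ qlogis(m2_predictions), family = 'binomial')
recal_predictions <- predict(recalibrated, type = 'response')

auc(recal_predictions, train_data$y)       # same AUC as m1 and m2
log_loss(recal_predictions, train_data$y)  # back to roughly m1's log loss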

So a model that maximizes AUC does not necessarily minimize log loss. Whether the model that minimizes log loss also maximizes AUC depends heavily on the context: class separability, model bias, and so on. In practice there is often a weak relationship, but in general they are simply different objectives. Consider the following example, which increases the class separability (the effect size of our predictor):

# For each effect size, fit the model on 100 simulated data sets and plot
# log loss against AUC; press <Enter> to advance to the next effect size.
for (effect_size in 1:7) {
  results <- dplyr::bind_rows(lapply(1:100, function(trial) {
    train_data <- sampled_data(effect_size)
    m <- glm(y ~ x1, data = train_data, family = 'binomial')
    predictions <- predict(m, type = 'response')
    list(auc = auc(predictions, train_data$y),
         log_loss = log_loss(predictions, train_data$y),
         effect_size = effect_size)
  }))
  plot(results$auc, results$log_loss, main = paste("Effect size =", effect_size))
  readline()
}

[Scatter plot of log loss against AUC over the 100 simulations, effect size = 1]
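
If a numeric summary is preferred to eyeballing the plots, a sketch along these lines (reusing the helper functions above; the object name is just illustrative) tabulates the sample correlation between AUC and log loss at each effect size:

# Non-interactive variant: record the correlation between AUC and log loss
# across 100 simulated data sets for each effect size.
cor_by_effect <- do.call(rbind, lapply(1:7, function(effect_size) {
  results <- dplyr::bind_rows(lapply(1:100, function(trial) {
    train_data <- sampled_data(effect_size)
    m <- glm(y ~ x1, data = train_data, family = 'binomial')
    predictions <- predict(m, type = 'response')
    list(auc = auc(predictions, train_data$y),
         log_loss = log_loss(predictions, train_data$y))
  }))
  data.frame(effect_size = effect_size,
             cor_auc_logloss = cor(results$auc, results$log_loss))
}))
cor_by_effect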