The “binary:logistic” objective function in XGBoost

boosting, data mining, machine learning

I am reading through Chen's XGBoost paper. He writes that during the $t^{\text{th}}$ iteration, the objective function below is minimised.

$$ L^{(t)} = \sum_{i=1}^{n} l(y_i, \hat{y}_i^{(t-1)} + f_t(x_i)) + \Omega(f_t) $$

Here, $l$ is a differentiable convex loss function, $f_t$ represents the $t^{\text{th}}$ tree, and $\hat{y}_i^{(t-1)}$ represents the prediction for the $i^{\text{th}}$ instance at iteration $t-1$.

I was wondering what $l$ is when using XGBoost for binary classification?

Best Answer

It appears there is an option objective: "binary:logistic". From the parameter documentation:

“binary:logistic” – logistic regression for binary classification, output probability

“binary:logitraw” – logistic regression for binary classification, output score before logistic transformation

See http://xgboost.readthedocs.io/en/latest/parameter.html

So $l$ is the log loss (binary cross-entropy). For a label $y_i \in \{0, 1\}$ and a raw score $\hat{y}_i$, the predicted probability is $p_i = \sigma(\hat{y}_i) = 1/(1 + e^{-\hat{y}_i})$, and

$$ l(y_i, \hat{y}_i) = -\left[ y_i \ln p_i + (1 - y_i) \ln(1 - p_i) \right] = y_i \ln\left(1 + e^{-\hat{y}_i}\right) + (1 - y_i) \ln\left(1 + e^{\hat{y}_i}\right), $$

which is differentiable and convex in $\hat{y}_i$, as the paper requires.
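As a minimal sketch of how the two options relate (the synthetic data and parameter values here are my own, purely illustrative): a booster trained with "binary:logistic" outputs probabilities, and the raw scores before the logistic transformation, which is what "binary:logitraw" reports, can be recovered with output_margin=True.

```python
import numpy as np
import xgboost as xgb

# Synthetic binary-classification data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

dtrain = xgb.DMatrix(X, label=y)
params = {"objective": "binary:logistic", "max_depth": 3, "eta": 0.3}
bst = xgb.train(params, dtrain, num_boost_round=20)

prob = bst.predict(dtrain)                     # probabilities in (0, 1)
raw = bst.predict(dtrain, output_margin=True)  # scores before the logistic transform

# The two outputs differ only by the sigmoid transformation.
assert np.allclose(prob, 1.0 / (1.0 + np.exp(-raw)), atol=1e-6)
```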
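To make the loss itself explicit, here is a sketch (continuing from the snippet above, in the spirit of the custom-objective demo shipped with XGBoost, not something stated in the original answer) that optimises the same log loss via a custom objective. The grad and hess returned are the first and second derivatives of $l$ with respect to the raw score $\hat{y}_i$.

```python
def logloss_obj(preds, dtrain):
    """Gradient and Hessian of the binary log loss w.r.t. the raw score."""
    labels = dtrain.get_label()
    p = 1.0 / (1.0 + np.exp(-preds))  # sigmoid of the raw margin
    grad = p - labels                 # dl/d(y_hat)
    hess = p * (1.0 - p)              # d^2 l / d(y_hat)^2
    return grad, hess

# Training with this custom objective optimises the same log loss as
# "binary:logistic".
bst_custom = xgb.train({"max_depth": 3, "eta": 0.3}, dtrain,
                       num_boost_round=20, obj=logloss_obj)
```

Note that with a custom objective, predict() returns raw scores rather than probabilities, so you would apply the sigmoid yourself.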
