Dropping one of the columns when using one-hot encoding

categorical-data, categorical-encoding, discrete-data, machine-learning, regression

My understanding is that in machine learning it can be a problem if your dataset has highly correlated features, as they effectively encode the same information.

Recently someone pointed out that when you do one-hot encoding on a categorical variable you end up with correlated features, so you should drop one of them as a "reference".

For example, encoding gender as two variables, is_male and is_female, produces two features which are perfectly negatively correlated, so they suggested using just one of them, effectively setting the baseline to, say, male, and then checking whether the is_female column is important in the predictive algorithm.
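For concreteness, here is a minimal R sketch (the tiny gender vector is invented just for illustration) showing that keeping both dummy columns gives features that always sum to one, hence are perfectly negatively correlated, while R's default treatment coding keeps an intercept plus a single column and uses the first level as the baseline:

```r
gender <- factor(c("male", "female", "female", "male"))

# keep both dummy columns (one-hot encoding without a reference level):
# the columns sum to 1 in every row, so they are perfectly anti-correlated
X_full <- model.matrix(~ gender - 1)
cor(X_full[, 1], X_full[, 2])   # -1

# default treatment coding: intercept plus one dummy column,
# with the first factor level acting as the baseline
X_ref <- model.matrix(~ gender)
head(X_ref)
```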

That made sense to me, but I haven't found anything online suggesting this is the case, so is this wrong or am I missing something?

Possible (unanswered) duplicate: Does collinearity of one-hot encoded features matter for SVM and LogReg?

Best Answer

This depends on the models (and maybe even the software) you want to use. With linear regression, or generalized linear models estimated by maximum likelihood or least squares (in R this means using the functions lm or glm), you need to leave out one column. Otherwise you will get a message about some columns being "left out because of singularities"$^\dagger$.
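A minimal sketch of what that looks like in R (the data below are simulated purely for illustration):

```r
# With both dummy columns in the model, the design matrix (intercept,
# is_male, is_female) is rank deficient, so lm() drops one coefficient
# and reports it as NA ("not defined because of singularities").
set.seed(1)
n <- 100
is_male   <- rbinom(n, 1, 0.5)
is_female <- 1 - is_male
y <- 2 + 0.5 * is_female + rnorm(n)

fit <- lm(y ~ is_male + is_female)
summary(fit)   # one of the dummy coefficients comes back as NA
```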

But if you estimate such models with regularization, for example ridge, lasso or the elastic net, then you should not leave out any columns. The regularization takes care of the singularities, and, more importantly, the predictions obtained may depend on which columns you leave out. That will not happen when you do not use regularization$^\ddagger$. See the answer at How to interpret coefficients of a multinomial elastic net (glmnet) regression which supports this view (with a direct quote from one of the authors of glmnet).
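For example, with glmnet you would pass the full one-hot matrix and let the penalty handle the redundancy (a sketch with simulated data; alpha = 0 gives ridge, alpha = 1 the lasso):

```r
library(glmnet)

set.seed(1)
n   <- 200
grp <- factor(sample(c("a", "b", "c"), n, replace = TRUE))
y   <- c(0, 1, 2)[as.integer(grp)] + rnorm(n)

# full one-hot encoding: one column per level, no reference column dropped
X <- model.matrix(~ grp - 1)

fit <- cv.glmnet(X, y, alpha = 0)   # ridge with cross-validated penalty
coef(fit, s = "lambda.min")         # all three level effects are estimated and shrunk
```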

With other models, use the same principle. If the predictions obtained depend on which columns you leave out, then do not do it. Otherwise it is fine.

So far, this answer has only mentioned linear (and some mildly non-linear) models. But what about very non-linear models, like trees and random forests? Ideas about categorical encoding, like one-hot, stem mainly from linear models and their extensions, and there is little reason to think that ideas derived from that context apply without modification to trees and forests. For some ideas, see Random Forest Regression with sparse data in Python.
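For instance, in R a forest can consume the categorical variable directly as a factor, so no dummy coding, and hence no dropped column, is needed at all (a sketch assuming the randomForest package, with simulated data):

```r
library(randomForest)

set.seed(1)
n   <- 200
grp <- factor(sample(c("a", "b", "c"), n, replace = TRUE))
x1  <- rnorm(n)
y   <- c(0, 1, 2)[as.integer(grp)] + 0.5 * x1 + rnorm(n)
dat <- data.frame(y, grp, x1)

# the factor is split on directly; no one-hot encoding is constructed
rf <- randomForest(y ~ grp + x1, data = dat)
rf
```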

$^\dagger$ But if you use factor variables, R will take care of that for you.

$^\ddagger$ Trying to answer the extra question in the comments: when using regularization, iterative methods which do not need matrix inversion are most often used (as with the lasso or elastic net), so the fact that the design matrix does not have full rank is not a problem. With ridge regularization matrix inversion may be used, but in that case the regularization term added to the matrix before inversion makes it invertible. That is a technical reason; a more profound reason is that removing one column changes the optimization problem: it changes the meaning of the parameters, and it will actually lead to different optimal solutions. As a concrete example, say you have a categorical variable with three levels, 1, 2 and 3. The corresponding parameters are $\beta_1, \beta_2, \beta_3$. Leaving out column 1 forces $\beta_1=0$, while the other two parameters change meaning to $\beta_2-\beta_1, \beta_3-\beta_1$. So those two differences will be shrunk. If you leave out another column, other contrasts in the original parameters will be shrunk. So this changes the criterion function being optimized, and there is no reason to expect equivalent solutions. If this is not clear enough, a simulated example (sketched below) can make it concrete.
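A minimal sketch of such a simulation (the data, the fixed penalty value, and the use of glmnet are illustrative assumptions, not taken from the answer above): it compares ridge fits that drop different reference columns and shows that their predictions differ, whereas unpenalized least squares gives the same fitted values whichever column is dropped.

```r
library(glmnet)

set.seed(1)
n   <- 300
grp <- factor(sample(1:3, n, replace = TRUE))
y   <- c(1, 2, 4)[as.integer(grp)] + rnorm(n)

X_full  <- model.matrix(~ grp - 1)   # all three dummy columns
X_drop1 <- X_full[, -1]              # level 1 as the reference
X_drop3 <- X_full[, -3]              # level 3 as the reference

lam <- 0.5                           # one fixed ridge penalty, for comparison
p1 <- predict(glmnet(X_drop1, y, alpha = 0, lambda = lam), newx = X_drop1)
p3 <- predict(glmnet(X_drop3, y, alpha = 0, lambda = lam), newx = X_drop3)
max(abs(p1 - p3))                    # clearly non-zero: the fits differ

# Without regularization the choice of reference column does not matter:
f1 <- fitted(lm(y ~ X_drop1))
f3 <- fitted(lm(y ~ X_drop3))
max(abs(f1 - f3))                    # essentially zero
```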
