This depends on the models (and maybe even the software) you want to use. With linear regression, or generalized linear models estimated by maximum likelihood (or least squares) (in R this means using the functions `lm` or `glm`), you need to leave out one column. Otherwise you will get a message about some columns being "left out because of singularities"$^\dagger$.
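A minimal sketch of what this looks like in R (simulated data; the variable names are illustrative):

```r
set.seed(1)
color <- factor(sample(c("red", "green", "blue"), 20, replace = TRUE))
y <- rnorm(20)

## Full one-hot encoding: three indicator columns that sum to 1 in every row,
## so together with the intercept the design matrix is rank-deficient.
X <- model.matrix(~ color - 1)
summary(lm(y ~ X))   # reports "(1 not defined because of singularities)"

## Passing the factor directly lets R drop one level (treatment coding) for you.
coef(lm(y ~ color))  # reference level is absorbed by the intercept
```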
But if you estimate such models with regularization, for example ridge, lasso, or the elastic net, then you should not leave out any columns. The regularization takes care of the singularities and, more importantly, the predictions obtained may depend on which columns you leave out. That will not happen when you do not use regularization$^\ddagger$. See the answer at How to interpret coefficients of a multinomial elastic net (glmnet) regression, which supports this view (with a direct quote from one of the authors of `glmnet`).
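As a hedged sketch (simulated data again), keeping all dummy columns when fitting a penalized model with `glmnet` might look like this:

```r
library(glmnet)
set.seed(1)
color <- factor(sample(c("red", "green", "blue"), 100, replace = TRUE))
x2 <- rnorm(100)
y <- rnorm(100)

## Keep all three indicator columns: no level is arbitrarily chosen as the
## reference before shrinkage is applied.
X <- model.matrix(~ color + x2 - 1)
fit <- cv.glmnet(X, y, alpha = 0)   # alpha = 0 is ridge; alpha = 1 is lasso
coef(fit, s = "lambda.min")
```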
With other models, apply the same principle: if the predictions obtained depend on which columns you leave out, then do not leave any out. Otherwise it is fine.
So far, this answer has only mentioned linear (and some mildly non-linear) models. But what about very non-linear models, like trees and random forests? The ideas about categorical encoding, like one-hot, stem mainly from linear models and their extensions. There is little reason to think that ideas derived from that context apply without modification to trees and forests! For some ideas, see Random Forest Regression with sparse data in Python.
$^\dagger$ But if you use factor variables, R will take care of that for you.
$^\ddagger$ Trying to answer the extra question in the comments: when using regularization, iterative methods are most often used (as with the lasso or elastic net) which do not need matrix inversion, so the design matrix not having full rank is not a problem. With ridge regularization, matrix inversion may be used, but in that case the regularization term added to the matrix before inversion makes it invertible. That is the technical reason; a more profound reason is that removing one column changes the optimization problem: it changes the meaning of the parameters, and it will actually lead to different optimal solutions. As a concrete example, say you have a categorical variable with three levels, 1, 2 and 3. The corresponding parameters are $\beta_1, \beta_2, \beta_3$. Leaving out column 1 leads to $\beta_1=0$, while the other two parameters change meaning to $\beta_2-\beta_1, \beta_3-\beta_1$. So those two differences are what get shrunk. If you leave out another column, other contrasts in the original parameters will be shrunk. So this changes the criterion function being optimized, and there is no reason to expect equivalent solutions! If this is not clear enough, see the simulated sketch below.
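A minimal simulated sketch of that point (illustrative data; the exact numbers do not matter): with ridge, dropping different dummy columns gives genuinely different fitted values.

```r
library(glmnet)
set.seed(42)
g <- factor(sample(1:3, 200, replace = TRUE))
y <- as.numeric(g) + rnorm(200)

X_full  <- model.matrix(~ g - 1)   # all three indicator columns
X_drop1 <- X_full[, -1]            # leave out level 1
X_drop3 <- X_full[, -3]            # leave out level 3

lam <- 0.5                          # an arbitrary fixed penalty
p1 <- predict(glmnet(X_drop1, y, alpha = 0, lambda = lam), X_drop1)
p3 <- predict(glmnet(X_drop3, y, alpha = 0, lambda = lam), X_drop3)
max(abs(p1 - p3))   # nonzero: the fit depends on which column was dropped
```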
Best Answer
One-hot encoding ensures that no implicit order is imposed on the feature, while integer/label encoding benefits from an order when one genuinely exists. If there is no inherent ordering, the usual approach is one-hot encoding; however, sometimes (e.g. with high-cardinality features) other options may be preferred.
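To make the two encodings concrete, a toy illustration in R (hypothetical colors):

```r
color <- factor(c("red", "blue", "green", "red"))

as.integer(color)           # label/integer encoding: 3 1 2 3 (an order is imposed)
model.matrix(~ color - 1)   # one-hot encoding: one indicator column per level
```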
How you encode your features always matters because it changes the model's behavior. For example, in random forests, or simply decision trees, with label encoding it is possible to split the samples into two groups where one side is, say, {Red, Blue} and the other side is {Green, Yellow}, if they are ordered as Red, Blue, Green, Yellow (i.e. 1, 2, 3, 4), by splitting at the value $2.5$. This is not possible in one split with one-hot encoding. However, such a split may or may not make sense in the context of the problem. This naturally affects the branching of your tree(s), and it interacts with hyper-parameters like max depth, since one-hot encoding needs several splits to isolate the same group of categories.
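A hedged sketch of that splitting behaviour, using `rpart` on toy data where the useful grouping happens to be {Red, Blue} versus {Green, Yellow}:

```r
library(rpart)
set.seed(7)
code <- sample(1:4, 200, replace = TRUE)   # 1 = Red, 2 = Blue, 3 = Green, 4 = Yellow
y <- ifelse(code <= 2, 0, 1) + rnorm(200, sd = 0.1)

## With integer codes a single split at 2.5 separates the two groups;
## with one-hot columns the same partition needs more than one split.
fit <- rpart(y ~ code, control = rpart.control(maxdepth = 1))
fit   # the printed tree shows the single split "code < 2.5"
```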
Therefore, we can't say that OHE only matters for models where you multiply your features by coefficients.