Solved – Why Lasso or ElasticNet performs better than Ridge when the features are correlated

elastic net, lasso, regression, regularization, ridge regression

I have a set of 150 features, and many of them are highly correlated with each other. My goal is to predict the value of a discrete variable, whose range is 1-8. My sample size is 550, and I am using 10-fold cross-validation.

AFAIK, among the regularization methods (Lasso, ElasticNet, and Ridge), Ridge is supposed to be more robust to correlation among the features. That is why I expected that with Ridge I would obtain a more accurate prediction. However, my results show that the mean absolute error of Lasso or ElasticNet is around 0.61, whereas this score is 0.97 for ridge regression. I wonder what the explanation for this would be. Is it because I have many features, and Lasso performs better because it does a sort of feature selection, getting rid of the redundant features?

Best Answer

Suppose you have two highly correlated predictor variables $x, z$, and suppose both are centered and scaled (to mean zero, variance one). Then the ridge penalty on the parameter vector is $\beta_1^2 + \beta_2^2$, while the lasso penalty is $\lvert \beta_1 \rvert + \lvert \beta_2 \rvert$. Now, since the two predictors are highly collinear, $x$ and $z$ can more or less substitute for each other in predicting $Y$, so many linear combinations of $x$ and $z$ that simply trade part of $x$ for part of $z$ will work very similarly as predictors: for example $0.2 x + 0.8 z$, $0.3 x + 0.7 z$ or $0.5 x + 0.5 z$ will be about equally good. Looking at these three examples, the lasso penalty is the same in all three cases (it is 1), while the ridge penalty differs: it is 0.68, 0.58 and 0.50 respectively. So the ridge penalty prefers equal weighting of collinear variables, while the lasso penalty cannot choose between them. This is one reason ridge (or, more generally, elastic net, which is a linear combination of the lasso and ridge penalties) works better with collinear predictors: when the data give little reason to choose between different linear combinations of collinear predictors, lasso will just "wander", while ridge tends to choose equal weighting. That last might be a better guess for use with future data! And if that is so with the present data, it can show up in cross-validation as better results with ridge.
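You can see this behaviour numerically with a small simulation (a minimal sketch using scikit-learn; the data-generating process, the noise levels and the regularization strengths are made-up illustrations, not anyone's real data):

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(0)
n = 200

# Two highly collinear predictors: z is almost a copy of x
x = rng.normal(size=n)
z = x + 0.05 * rng.normal(size=n)
X = np.column_stack([x, z])
X = (X - X.mean(axis=0)) / X.std(axis=0)   # center and scale

# The response depends only on the shared signal, so any mix of x and z predicts it
y = x + 0.5 * rng.normal(size=n)

ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)

print("ridge coefficients:", ridge.coef_)   # typically close to equal weights on x and z
print("lasso coefficients:", lasso.coef_)   # often puts most of the weight on just one of them
```

In runs like this, ridge splits the coefficient roughly evenly between the two collinear columns, while lasso tends to load one of them and leave the other near zero.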

We can view this in a Bayesian way: ridge and lasso imply different prior information, and the prior information implied by ridge tends to be more reasonable in such situations. (I learned this explanation, more or less, from the book "Statistical Learning with Sparsity: The Lasso and Generalizations" by Trevor Hastie, Robert Tibshirani and Martin Wainwright, but at the moment I am not able to find a direct quote.)
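To make the correspondence concrete (this is the standard penalty-as-log-prior sketch, not a quote from the book): penalized least squares is MAP estimation, with the penalty acting as a negative log-prior,

$$\hat\beta = \arg\min_\beta \; \|y - X\beta\|_2^2 + \lambda \, P(\beta), \qquad P(\beta) = \sum_j \beta_j^2 \;\;(\text{ridge, Gaussian prior}) \quad\text{or}\quad P(\beta) = \sum_j \lvert\beta_j\rvert \;\;(\text{lasso, Laplace prior}).$$

The Gaussian prior shrinks smoothly and favours spreading weight evenly across collinear predictors, while the Laplace prior concentrates mass near zero and, through the geometry of the $\ell_1$ penalty, tends to set some coefficients exactly to zero.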


But the OP seems to have a different problem:

However, my results show that the mean absolute error of Lasso or ElasticNet is around 0.61, whereas this score is 0.97 for ridge regression

Now, lasso also effectively does variable selection: it can set some coefficients exactly to zero, which ridge cannot do (or only with probability zero). So it might be that in the OP's data, among the collinear variables, some are truly effective and others do not act at all (and the degree of collinearity is low enough that this can be detected). See When should I use lasso vs ridge? where this is discussed. A detailed analysis would need more information than is given in the question.
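As a rough illustration of that scenario (a sketch only: the sample size and feature count mirror the question, but the correlation structure, the sparse true coefficients and the noise are assumptions, not the OP's data), lasso often comes out ahead of ridge in cross-validated MAE when only a few of the correlated features actually matter:

```python
import numpy as np
from sklearn.linear_model import LassoCV, RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n, p = 550, 150

# Correlated features: every column shares a common latent factor
latent = rng.normal(size=(n, 1))
X = 0.7 * latent + 0.3 * rng.normal(size=(n, p))
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Sparse truth: only 10 of the 150 features actually enter the response
beta = np.zeros(p)
beta[:10] = 1.0
y = X @ beta + rng.normal(size=n)

lasso = LassoCV(cv=10)
ridge = RidgeCV(alphas=np.logspace(-3, 3, 25))

for name, model in [("lasso", lasso), ("ridge", ridge)]:
    mae = -cross_val_score(model, X, y, cv=10,
                           scoring="neg_mean_absolute_error").mean()
    print(f"{name}: 10-fold CV mean absolute error = {mae:.2f}")
```

Which method wins depends on how sparse the true signal is and how strong the collinearity is, which is exactly why the question cannot be settled without more detail about the OP's data.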