If you rank 1 million ridge-shrunk, scaled, but non-zero coefficients, you still have to make some kind of decision: you will look at the n best predictors, but what should n be? The LASSO solves this problem in a principled, objective way, because at every step on the regularization path (and you would typically settle on one point via, e.g., cross-validation) only m coefficients are non-zero.
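A minimal sketch of that point, using scikit-learn on simulated data (the sizes, alphas and names here are illustrative, not from the thread): every alpha on the lasso path comes with a definite set of non-zero coefficients, and cross-validation then picks one alpha, and hence one sparse model, for you.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import lasso_path, LassoCV

# Simulated data: 50 features, only 5 of which are truly informative.
X, y = make_regression(n_samples=200, n_features=50, n_informative=5,
                       noise=5.0, random_state=0)

# Each point on the lasso path has its own count of non-zero coefficients.
alphas, coefs, _ = lasso_path(X, y)
for a, c in zip(alphas[::10], coefs[:, ::10].T):
    print(f"alpha = {a:10.3f}   non-zero coefficients: {np.sum(c != 0)}")

# Cross-validation selects one alpha, i.e. one particular sparse model.
cv_model = LassoCV(cv=5, random_state=0).fit(X, y)
print("alpha chosen by CV:", cv_model.alpha_,
      "-> non-zero coefficients:", np.sum(cv_model.coef_ != 0))
```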
Very often, you will train a model on some data and later apply it to data not yet collected. For example, you could fit your model on 50,000,000 emails and then apply it to every new email. True, you will fit it on the full feature set for the first 50,000,000 mails, but for every subsequent email you will be working with a much sparser, faster, and far more memory-efficient model. You also won't even need to collect the information for the dropped features, which can be hugely helpful if the features are expensive to extract, e.g. via genotyping.
Another perspective on the L1/L2 question, raised by e.g. Andrew Gelman, is that you often have some intuition about what your problem is like. In some circumstances, reality may truly be sparse. Maybe you have measured millions of genes, but it is plausible that only 30,000 of them actually determine dopamine metabolism. In such a situation, L1 arguably fits the problem better.
In other cases, reality may be dense. For example, in psychology, "everything correlates (to some degree) with everything" (Paul Meehl). A preference for apples vs. oranges probably does correlate somehow with political leanings, and even with IQ. Regularization might still make sense here, but truly zero effects should be rare, so L2 might be more appropriate.
Suppose you have two highly correlated predictor variables $x, z$, both centered and scaled (to mean zero, variance one). Then the ridge penalty on the parameter vector is $\beta_1^2 + \beta_2^2$, while the lasso penalty is $|\beta_1| + |\beta_2|$. Since the model is assumed to be highly collinear, $x$ and $z$ can more or less substitute for each other in predicting $Y$, so many linear combinations of $x$ and $z$ in which we partly swap $x$ for $z$ will work very similarly as predictors; for example, $0.2x + 0.8z$, $0.3x + 0.7z$ and $0.5x + 0.5z$ will be about equally good. Now look at these three examples: the lasso penalty is the same in all three cases, namely 1, while the ridge penalties differ, being 0.68, 0.58 and 0.5 respectively. So the ridge penalty prefers equal weighting of collinear variables, while the lasso penalty cannot choose between them. This is one reason ridge (or, more generally, the elastic net, which is a linear combination of the lasso and ridge penalties) works better with collinear predictors: when the data give little reason to choose between different linear combinations of collinear predictors, the lasso will just "wander", while ridge tends toward equal weighting. The equal weighting may be the better guess for future data, and if it is already so for the present data, that could show up in cross-validation as better results for ridge.
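A quick numeric check of the penalties quoted above, for exactly the three weightings from the text (nothing here beyond that arithmetic):

```python
# Lasso and ridge penalties for the three example weightings of x and z.
for b1, b2 in [(0.2, 0.8), (0.3, 0.7), (0.5, 0.5)]:
    lasso_pen = abs(b1) + abs(b2)   # L1 penalty |b1| + |b2|
    ridge_pen = b1**2 + b2**2       # L2 penalty b1^2 + b2^2
    print(f"b = ({b1}, {b2}):  lasso = {lasso_pen:.2f},  ridge = {ridge_pen:.2f}")

# The lasso penalty is 1.00 in all three cases; the ridge penalty is
# 0.68, 0.58 and 0.50, smallest for the equal weighting (0.5, 0.5).
```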
We can also view this in a Bayesian way: ridge and lasso imply different prior information, and the prior information implied by ridge tends to be more reasonable in such situations. (I learned this explanation, more or less, from the book "Statistical Learning with Sparsity: The Lasso and Generalizations" by Trevor Hastie, Robert Tibshirani and Martin Wainwright, although at the moment I cannot find a direct quote.)
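For reference, the correspondence behind that Bayesian reading is standard (not specific to this thread): ridge is the MAP estimate under independent Gaussian priors $\beta_j \sim \mathcal{N}(0, \tau^2)$, since $-\log p(\beta) = \frac{1}{2\tau^2}\sum_j \beta_j^2 + \text{const}$, while the lasso is the MAP estimate under Laplace (double-exponential) priors $\beta_j \sim \text{Laplace}(0, b)$, since $-\log p(\beta) = \frac{1}{b}\sum_j |\beta_j| + \text{const}$. The Laplace prior's sharp peak at zero is what allows exact zeros; the Gaussian prior shrinks every coefficient but keeps all of them in the model.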
But the OP seems to have a different problem:
However, my results show that the mean absolute error of Lasso or Elastic is around 0.61 whereas this score is 0.97 for the ridge regression
Now, the lasso also effectively performs variable selection: it can set some coefficients exactly to zero, which ridge cannot do (except with probability zero). So it might be that in the OP's data, among the collinear variables, some are effective while others have no effect at all (and the degree of collinearity is low enough that this can be detected). See When should I use lasso vs ridge? where this is discussed. A detailed analysis would need more information than is given in the question.
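To make the contrast concrete, here is a minimal sketch on simulated data (not the OP's data; the alphas and sizes are arbitrary choices): two nearly identical predictors plus a few irrelevant ones. The lasso sets some coefficients exactly to zero and tends to concentrate weight on one member of the correlated pair, while ridge only shrinks and typically splits the weight between them.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
z = x + 0.05 * rng.normal(size=n)          # z is almost a copy of x
noise = rng.normal(size=(n, 5))            # five irrelevant predictors
X = np.column_stack([x, z, noise])
y = x + z + rng.normal(size=n)             # the truth uses x and z equally

lasso = Lasso(alpha=0.1).fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)

print("lasso coefficients:", np.round(lasso.coef_, 2))
print("ridge coefficients:", np.round(ridge.coef_, 2))
print("coefficients set exactly to zero -> lasso:", np.sum(lasso.coef_ == 0),
      " ridge:", np.sum(ridge.coef_ == 0))
```

Exactly how the lasso splits (or fails to split) the weight between x and z depends on the particular draw, which is the "wandering" behaviour described above.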
Best Answer
As the answer from Frank Harrell on the page that you linked puts it:
I don't know that there can be a general "proof" of when ridge or LASSO will work better in terms of mean squared error (MSE). A reasonable way to approach this issue, however, is to consider the principle that you don't gain anything by throwing away information.
In textbook explanations of LASSO you will find scenarios where only a handful of uncorrelated predictors are truly related to outcome while the other predictors are random and unrelated to outcome. Applying that principle, when you truly do have a small fraction of non-zero coefficients, you are not throwing away any useful information by discarding the others. LASSO works well in such cases; ridge will necessarily keep some spurious coefficients whose values depend heavily on the sample at hand, leading to a model that might not generalize well out of sample.
In many real-life situations, however, you have a set of predictors that are correlated with each other while many are associated with outcome to some extent. In that case, if you use LASSO you will only choose one or a few predictors from a set of correlated predictors, and thus you throw away information the discarded predictors might provide for out-of-sample applications of your model. Ridge regression, in contrast, keeps some information from all of the predictors.
Whether or not it provides better predictions in terms of MSE, ridge at least should help to protect from the luck of the draw in situations where one of a set of correlated predictors has an unusually high relation to outcome in the sample at hand versus the population as a whole, and thus is selected and overweighted by LASSO. So unless practical considerations with very large numbers of predictors make it unwieldy, or you know going into the study that only a few predictors are related to outcome, ridge provides a reasonable general choice.
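A rough simulation of the two scenarios described above may help (illustrative only, with made-up dimensions and effect sizes, not the OP's data): sparse truth with independent predictors versus many small effects with correlated predictors, comparing held-out MSE of cross-validated lasso and ridge. Typically the lasso does relatively better in the first scenario and ridge is competitive or better in the second, though the exact numbers depend on the random draw.

```python
import numpy as np
from sklearn.linear_model import LassoCV, RidgeCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n, p = 300, 50

def run(beta, corr):
    # Equicorrelated predictors with pairwise correlation `corr`.
    shared = rng.normal(size=(n, 1))
    X = np.sqrt(corr) * shared + np.sqrt(1 - corr) * rng.normal(size=(n, p))
    y = X @ beta + rng.normal(size=n)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    lasso = LassoCV(cv=5).fit(X_tr, y_tr)
    ridge = RidgeCV(alphas=np.logspace(-3, 3, 20)).fit(X_tr, y_tr)
    return (mean_squared_error(y_te, lasso.predict(X_te)),
            mean_squared_error(y_te, ridge.predict(X_te)))

# Scenario 1: only 3 of 50 coefficients are non-zero, predictors independent.
beta_sparse = np.zeros(p)
beta_sparse[:3] = 2.0
print("sparse truth        (lasso MSE, ridge MSE):", run(beta_sparse, corr=0.0))

# Scenario 2: many small effects, predictors strongly correlated.
beta_dense = np.full(p, 0.2)
print("dense + correlated  (lasso MSE, ridge MSE):", run(beta_dense, corr=0.7))
```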