Solved – Can the alpha and lambda values of a glmnet object determine whether ridge or Lasso was used?

caret, cross-validation, generalized linear model, regression

Given a glmnet object fitted with train(), where the trControl method is "cv" and the number of folds is 5, I obtained bestTune values of alpha = 0.1 and lambda = 0.007688342. On inspecting the glmnet object, I notice that the alpha values searched start from 0.1.
Can I infer that the method used is Lasso rather than ridge, because the alpha value is non-zero?

In general, can the values of alpha and lambda indicate which model is being used?

Best Answer

As far as I understand glmnet, $\alpha=0$ would actually be a ridge penalty and $\alpha=1$ a Lasso penalty (rather than the other way around), and as far as glmnet is concerned you can fit both of those end cases.
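The elastic net penalty that glmnet uses makes this explicit. Per the glmnet documentation, for a Gaussian response the objective is

$$\min_{\beta_0,\beta}\ \frac{1}{2N}\sum_{i=1}^{N}\left(y_i - \beta_0 - x_i^{\top}\beta\right)^2 + \lambda\left[\frac{1-\alpha}{2}\lVert\beta\rVert_2^2 + \alpha\lVert\beta\rVert_1\right],$$

so at $\alpha=0$ only the squared $\ell_2$ (ridge) term remains, at $\alpha=1$ only the $\ell_1$ (Lasso) term remains, and any $\alpha$ strictly between the two is a mixture of both.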

A penalty with $\alpha=0.1$ would be fairly similar to the ridge penalty, but it is not the ridge penalty. If the search did not consider $\alpha$ below $0.1$, you can't infer much from the fact that the best value landed on that endpoint. If you know that a slightly larger $\alpha$ performed worse, then a wider range might well have chosen a smaller $\alpha$; but that doesn't suggest it would have been $0$, and I expect it would not. Conversely, if the grid of values is coarse, a value larger than $0.1$ might well have been better.
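One way to rule out the endpoint issue is to hand caret an explicit tuning grid that spans the full $[0, 1]$ range of $\alpha$, including the pure ridge and pure Lasso endpoints. A minimal sketch, using made-up example data (the variable names and grid values here are illustrative, not from the question):

```r
library(caret)
library(glmnet)

set.seed(1)
# hypothetical example data; substitute your own predictors and response
x <- matrix(rnorm(100 * 10), 100, 10)
y <- rnorm(100)
dat <- data.frame(y = y, x)

# a grid covering the full elastic-net range, including the
# pure ridge (alpha = 0) and pure Lasso (alpha = 1) endpoints
grid <- expand.grid(
  alpha  = seq(0, 1, by = 0.1),
  lambda = 10^seq(-4, 0, length.out = 20)
)

fit <- train(
  y ~ ., data = dat,
  method    = "glmnet",
  trControl = trainControl(method = "cv", number = 5),
  tuneGrid  = grid
)

fit$bestTune  # the selected alpha and lambda
```

If the best $\alpha$ still sits at an interior grid point rather than an endpoint, the choice is being driven by cross-validated performance rather than by the limits of the search.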

[You may want to check whether there was some other reason that $\alpha$ might have ended up at an endpoint; e.g. I seem to recall that $\lambda$ got set to an endpoint in forecasting if the coefficients for lambdaOpt were not saved.]