Solved – Using glmnet to solve the LASSO problem


I have recently been made aware of the Lasso algorithm and found that the package
glmnet can be used to solve it. (I have the glmnet package installed in R.)

If I have a matrix $A$ and a vector $y$, how do I solve the lasso problem of
minimizing $\|y-Ax\|_2^2 + \lambda \|x\|_1$? Also, how do I solve the constrained problem: minimize $\|y-Ax\|_2$ s.t. $\|x\|_1 \leq t$?

Any help is much appreciated.

Thanks

Best Answer

glmnet is set up to solve the first, penalized formulation rather than the size-constrained one (the two are equivalent: for every constraint level $t$ there is a corresponding penalty $\lambda$). glmnet has a fairly unique method of solving the problem, cyclical coordinate descent along a path of lambda values, and is probably the best out there. For most applications, you just feed it your design matrix in the x argument and the response in the y argument. The x needs to already have factors expanded into dummies/contrasts. I would usually recommend full dummy coding instead of the default treatment coding unless you have a true reference category to shrink towards.
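As a minimal sketch of the basic call, on synthetic data (the matrix and vector here are made up for illustration):

```r
library(glmnet)

set.seed(1)
n <- 100
A <- matrix(rnorm(n * 10), n, 10)   # design matrix; numeric only, so any
                                    # factors must already be dummy-coded
y <- A %*% rnorm(10) + rnorm(n)

fit <- glmnet(x = A, y = y, alpha = 1)  # alpha = 1 gives the pure LASSO
coefs <- coef(fit, s = 0.1)             # coefficients at lambda = 0.1
```

The `s` argument to `coef` (and `predict`) selects the penalty value at which you want the solution.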

You then need to set alpha to choose the balance between the LASSO and ridge penalties. alpha = 1 gives the pure LASSO. Usually you would want alpha = 0.95 to add a touch of ridging, which handles any lingering multicollinearity and gives a bit better conditioning to the problem in general. If you're more focused on predictive accuracy than parsimony, moving alpha much further towards the ridge end (alpha ~ 0.05) is usually better.

glmnet will then fit a whole sequence of lambda values. It starts at the smallest lambda that produces an intercept-only model and then lowers lambda on a log scale, down to a fixed fraction of that starting value (by default 1e-4 of it when you have more observations than variables), fitting at 100 discrete values along the way. You can control how deep it goes and how many stops it makes via the lambda.min.ratio and nlambda arguments.
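A quick sketch of those lambda-sequence controls, again on synthetic data; the particular settings here are just illustrative:

```r
library(glmnet)

set.seed(1)
A <- matrix(rnorm(100 * 10), 100, 10)
y <- A %*% rnorm(10) + rnorm(100)

fit <- glmnet(A, y,
              nlambda = 200,            # more stops along the path
              lambda.min.ratio = 1e-6)  # go deeper (smaller final lambda)

lams <- fit$lambda   # the sequence actually fit; glmnet may stop early
                     # if the deviance stops improving
```

The sequence is decreasing, so the first entry is the intercept-only lambda and the last is the most lightly penalized fit.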

Lastly, you'll probably want to use the cv.glmnet function to pick a good value for lambda. The cross-validation gives you a good estimate of the lambda to use when making predictions on new data, and it usually produces a nice visual of the bias-variance tradeoff. Make sure the cross-validation curve actually reaches a minimum within the fitted lambda sequence; if not much penalty is really needed, you may have to supply a longer sequence of lambdas. Just be sure any custom sequence starts at an intercept-only model and decreases smoothly, like the default.
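A minimal cv.glmnet sketch on the same kind of synthetic data:

```r
library(glmnet)

set.seed(1)
A <- matrix(rnorm(100 * 10), 100, 10)
y <- A %*% rnorm(10) + rnorm(100)

cvfit <- cv.glmnet(A, y, alpha = 1)
plot(cvfit)                    # CV error vs. log(lambda)
lam_min <- cvfit$lambda.min    # lambda minimizing CV error
lam_1se <- cvfit$lambda.1se    # largest lambda within 1 SE of that minimum
preds <- predict(cvfit, newx = A, s = "lambda.min")
```

lambda.1se is a common, slightly more parsimonious alternative to lambda.min.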

A few minor notes: do be sure to choose the correct distribution for your response via the family argument, do know that the design matrix can be passed in sparse format if you have some factors with many levels, and do read the useful vignettes available on the CRAN glmnet webpage. Good luck!
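For the sparse-format point, a sketch using the Matrix package to build a sparse design from a factor with many levels (the data frame and column names here are made up):

```r
library(glmnet)
library(Matrix)

set.seed(1)
dat <- data.frame(f = factor(sample(letters, 500, replace = TRUE)),
                  z = rnorm(500))
y <- rnorm(500)

# sparse.model.matrix returns a dgCMatrix, which glmnet accepts directly
X <- sparse.model.matrix(~ f + z, data = dat)
fit <- glmnet(X, y, family = "gaussian")
```

With many-level factors this avoids materializing a huge, mostly-zero dense matrix.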

Update - Dummy Coding Contrast snippet:

There is likely a better way, but this should get the job done. This changes the default factor encoding in formulas to be full dummy coding:

> orig.contrasts <- options('contrasts')[[1]]
> # the contrasts formal soaks up any contrasts= argument so the FALSE below always wins
> contr.dummy <- function(..., contrasts) contr.treatment(..., contrasts = FALSE)
> cust.contrasts <- c('contr.dummy', 'contr.poly')
> names(cust.contrasts) <- names(orig.contrasts)
> options(contrasts = cust.contrasts)
> options('contrasts')