There are several math-heavy papers that describe the Bayesian Lasso, but I want tested, correct JAGS code that I can use.
Could someone post sample BUGS / JAGS code that implements regularized logistic regression? Any scheme (L1, L2, elastic net) would be great, but the lasso is preferred. I also wonder whether there are interesting alternative implementation strategies.
Best Answer
Since L1 regularization is equivalent to a Laplace (double exponential) prior on the relevant coefficients, you can do it as follows. Here I have three independent variables x1, x2, and x3, and y is the binary target variable. Selection of the regularization parameter $\lambda$ is done here by putting a hyperprior on it, in this case just uniform over a good-sized range.
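A minimal sketch of such a model in JAGS (the uniform endpoints for `lambda` and the vague normal prior on the intercept `b0` are illustrative choices, not the only reasonable ones):

```
model {
  # Likelihood: logistic regression on three predictors
  for (i in 1:N) {
    y[i] ~ dbern(p[i])
    logit(p[i]) <- b0 + b[1] * x1[i] + b[2] * x2[i] + b[3] * x3[i]
  }

  # Laplace (double exponential) priors on the slopes give L1 / lasso
  # shrinkage; in JAGS, ddexp(mu, tau) takes a location and a rate
  for (j in 1:3) {
    b[j] ~ ddexp(0, lambda)
  }

  # Vague prior on the intercept, which is usually left unregularized
  b0 ~ dnorm(0, 1.0E-3)

  # Hyperprior on the regularization parameter: uniform over a
  # good-sized range (these endpoints are illustrative)
  lambda ~ dunif(0.001, 10)
}
```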
Let's try it out using the `dclone` package in R, and compare the results to an unregularized logistic regression:
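A driver along these lines (the simulated data, the true coefficients, and the sampler settings are all illustrative; the model string is the one above, wrapped with `dclone`'s `custom.model`):

```r
library(dclone)  # front end to rjags; JAGS itself must be installed

## Simulated data: only x1 truly matters, so b[2] and b[3] should shrink
set.seed(42)
N  <- 100
x1 <- rnorm(N); x2 <- rnorm(N); x3 <- rnorm(N)
y  <- rbinom(N, 1, plogis(0.5 + 1.5 * x1))

jags_data <- list(N = N, x1 = x1, x2 = x2, x3 = x3, y = y)

## The lasso model from above, as a dclone model object
lasso_model <- custom.model("
model {
  for (i in 1:N) {
    y[i] ~ dbern(p[i])
    logit(p[i]) <- b0 + b[1] * x1[i] + b[2] * x2[i] + b[3] * x3[i]
  }
  for (j in 1:3) { b[j] ~ ddexp(0, lambda) }
  b0 ~ dnorm(0, 1.0E-3)
  lambda ~ dunif(0.001, 10)
}
")

fit <- jags.fit(jags_data, c("b0", "b", "lambda"), lasso_model,
                n.chains = 3, n.iter = 5000)
summary(fit)

## Unregularized logistic regression for comparison
summary(glm(y ~ x1 + x2 + x3, family = binomial))
```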
And we can see that the three `b` parameters have indeed been shrunk towards zero.

I don't know much about priors for the hyperparameter of the Laplace distribution (the regularization parameter), I'm sorry to say. I tend to use uniform distributions and look at the posterior to see whether it is reasonably well behaved, e.g., not piled up near an endpoint and pretty much peaked in the middle without horrible skewness problems. So far, that has typically been the case. Treating it as a variance parameter and using the recommendations in Gelman, "Prior distributions for variance parameters in hierarchical models" (Bayesian Analysis, 2006), works for me too.
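For what that route might look like, here is one sketch: put a half-Cauchy prior on the Laplace scale (the scale value 2.5 is an arbitrary but common choice) and derive the rate from it, using JAGS's truncated t distribution:

```
# Half-Cauchy(0, 2.5) prior on the Laplace scale s; the rate is its inverse
s ~ dt(0, pow(2.5, -2), 1) T(0,)
lambda <- 1 / s
for (j in 1:3) { b[j] ~ ddexp(0, lambda) }
```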