I found that in GLMnet the intercept is computed after the coefficient updates have converged, using the mean of the $y_i$'s and the means of the $x_{ij}$'s. The formula is similar to the previous one I gave, but uses the $\hat{\beta}_j$'s obtained after the update loop: $\beta_0=\bar{y}-\sum_{j=1}^{p} \hat{\beta}_j \bar{x}_j$.
In Python this gives something like:
self.intercept_ = ymean - np.dot(Xmean, self.coef_.T)
which I found on the scikit-learn page.
EDIT: the coefficients have to be standardized first:
self.coef_ = self.coef_ / X_std
$\beta_0=\bar{y}-\sum_{j=1}^{p} \frac{\hat{\beta}_j \bar{x}_j}{\sum_{i=1}^{n} x_{ij}^2}$.
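To make the arithmetic concrete, here is a small NumPy sketch (using ordinary least squares rather than the penalized fit, on made-up data) showing that fitting on centered X and y and then setting $\beta_0=\bar{y}-\sum_j \hat{\beta}_j \bar{x}_j$ recovers the intercept:

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 100, 3
# Features with deliberately nonzero means and different scales
X = rng.standard_normal((n, p)) * [1.0, 5.0, 0.2] + [2.0, -1.0, 4.0]
beta_true = np.array([1.5, -2.0, 0.5])
y = 3.0 + X @ beta_true + 0.01 * rng.standard_normal(n)

# Fit on centered data with no intercept column, as glmnet does internally
Xmean, ymean = X.mean(axis=0), y.mean()
coef, *_ = np.linalg.lstsq(X - Xmean, y - ymean, rcond=None)

# Recover the intercept from the means: b0 = ybar - sum_j b_j * xbar_j
intercept = ymean - Xmean @ coef

print(intercept)  # close to the true intercept of 3.0
```

The penalized case works the same way once you have the final (rescaled) coefficients: the intercept is just whatever makes the fit pass through the point of means.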
Here's an unintuitive fact: you're not actually supposed to give glmnet a single value of lambda. From the documentation:
Do not supply a single value for lambda (for predictions after CV use predict() instead). Supply instead a decreasing sequence of lambda values. glmnet relies on its warm starts for speed, and it's often faster to fit a whole path than compute a single fit.
cv.glmnet will help you choose lambda, as you alluded to in your examples. The authors of the glmnet package suggest cv$lambda.1se instead of cv$lambda.min, but in practice I've had success with the latter.
After running cv.glmnet, you don't have to rerun glmnet! Every lambda in the grid (cv$lambda) has already been run. This technique is called "warm starting," and you can read more about it here. Paraphrasing from the introduction, warm starting reduces the running time of iterative methods by using the solution of one optimization problem (e.g., glmnet with a larger lambda) as the starting value for a related optimization problem (e.g., glmnet with a smaller lambda).
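As an illustration of the idea (not glmnet's actual implementation), here is a minimal pure-NumPy lasso coordinate descent that fits the same decreasing lambda grid twice: once cold-starting from zero at every lambda, and once warm-starting from the previous solution. The data, grid, and tolerance are all made up for the sketch:

```python
import numpy as np

def lasso_cd(X, y, lam, beta0, tol=1e-8, max_iter=10_000):
    """Cyclic coordinate descent for (1/(2n))||y - Xb||^2 + lam*||b||_1.
    Starts from beta0 and returns (beta, number of full sweeps used)."""
    n, p = X.shape
    beta = beta0.copy()
    col_sq = (X ** 2).sum(axis=0) / n
    r = y - X @ beta                      # current residual
    for sweep in range(1, max_iter + 1):
        max_delta = 0.0
        for j in range(p):
            old = beta[j]
            rho = X[:, j] @ r / n + col_sq[j] * old   # partial-residual corr.
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            if beta[j] != old:
                r -= X[:, j] * (beta[j] - old)
                max_delta = max(max_delta, abs(beta[j] - old))
        if max_delta < tol:
            return beta, sweep
    return beta, max_iter

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
y = X[:, :5] @ rng.standard_normal(5) + 0.1 * rng.standard_normal(n)

lam_max = np.max(np.abs(X.T @ y)) / n
lambdas = lam_max * np.logspace(0, -2, 20)   # decreasing grid, as advised

# Cold: restart from zero at every lambda
cold_sweeps = sum(lasso_cd(X, y, lam, np.zeros(p))[1] for lam in lambdas)

# Warm: reuse the previous solution as the starting value
warm_sweeps, beta = 0, np.zeros(p)
for lam in lambdas:
    beta, s = lasso_cd(X, y, lam, beta)
    warm_sweeps += s

print(cold_sweeps, warm_sweeps)  # warm starts need no more total sweeps
```

Because consecutive solutions on the path are close to each other, the warm-started pass spends far fewer sweeps per lambda, which is why fitting the whole path is often cheaper than it sounds.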
To extract the desired fit from cv$glmnet.fit, try this:
small.lambda.index <- which(cv$lambda == cv$lambda.min)
small.lambda.betas <- cv$glmnet.fit$beta[, small.lambda.index]
Revision (1/28/2017)
No need to hack at the glmnet object like I did above; take @alex23lemm's advice below and pass s = "lambda.min", s = "lambda.1se", or some other number (e.g., s = .007) to both coef and predict. Note that your coefficients and predictions depend on this value, which is set by cross validation. Use a seed for reproducibility! And don't forget that if you don't supply an "s" in coef and predict, you'll be using the default of s = "lambda.1se". I have warmed up to that default after seeing it work better in small-data situations. s = "lambda.1se" also tends to provide more regularization, so if you're working with alpha > 0, it will also tend toward a more parsimonious model. You can also choose a numerical value of s with the help of plot.glmnet to get to somewhere in between (just don't forget to exponentiate the values from the x axis!).
Best Answer
The smallest value of lambda for which no parameters are selected may be computed by
$\lambda_{\max} = \max_j \frac{1}{\alpha n} \left| \sum_{i=1}^n \left[ Y_i - \bar{Y}(1-\bar{Y}) \right] X_{ij} \right|$
This comes out of the coordinate descent algorithm from this paper: Friedman, Jerome, Trevor Hastie, and Rob Tibshirani. "Regularization paths for generalized linear models via coordinate descent." Journal of statistical software 33.1 (2010): 1.
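For the Gaussian/lasso case with centered data, the paper's condition reduces to $\lambda_{\max}=\max_j \frac{1}{\alpha n}\,\lvert\langle x_j, y\rangle\rvert$. Here is a quick NumPy check on synthetic data (the Gaussian form, not the binomial expression quoted above) that the soft-threshold update from $\beta=0$ leaves every coefficient at zero at $\lambda_{\max}$, but not just below it:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, alpha = 150, 20, 1.0
X = rng.standard_normal((n, p))
X -= X.mean(axis=0)                       # center (glmnet also standardizes)
y = X[:, 0] * 2.0 + rng.standard_normal(n)
y -= y.mean()

# Gaussian case: smallest lambda that zeroes every coefficient
grad = X.T @ y / n                        # gradient of the loss at beta = 0
lam_max = np.max(np.abs(grad)) / alpha

def soft(z, t):                           # soft-thresholding operator
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# One coordinate-descent update from beta = 0 stays at zero for every
# coefficient at lam_max, but escapes zero at a slightly smaller lambda.
print(np.all(soft(grad, alpha * lam_max) == 0))        # True
print(np.any(soft(grad, 0.9 * alpha * lam_max) != 0))  # True
```

In other words, lambda_max is exactly the penalty level at which the soft threshold first swallows the largest gradient coordinate, which is why glmnet starts its path there.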