I found that in glmnet the intercept is computed after the coefficient updates have converged, using the mean of the $y_i$'s and the means of the $x_{ij}$'s. The formula is similar to the one I gave previously, but with the $\beta_j$'s taken after the update loop: $\beta_0=\bar{y}-\sum_{j=1}^{p} \hat{\beta}_j \bar{x}_j$.
In Python this gives something like:
self.intercept_ = ymean - np.dot(Xmean, self.coef_.T)
which I found in the scikit-learn source.
EDIT: the coefficients, which are fitted on standardized data, have to be rescaled to the original scale of $X$ first:
self.coef_ = self.coef_ / X_std
Combining the two steps gives $\beta_0=\bar{y}-\sum_{j=1}^{p} \frac{\hat{\beta}_j \bar{x}_j}{s_j}$, where $s_j$ is the scale (X_std) by which the $j$-th column was standardized.
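To illustrate, here is a minimal R sketch on simulated data of my own (only the glmnet package itself is assumed) that recovers glmnet's reported intercept from the fitted slopes and the column means:

library(glmnet)

set.seed(1)
x <- matrix(rnorm(100 * 5), 100, 5)
y <- as.numeric(3 + x %*% c(1, -1, 0.5, 0, 0) + rnorm(100))

fit <- glmnet(x, y, lambda = 0.1)
cf  <- as.matrix(coef(fit))               # (intercept, slopes) on the original scale
b0_manual <- mean(y) - sum(cf[-1, 1] * colMeans(x))

c(manual = b0_manual, glmnet = cf[1, 1])  # should agree up to convergence tolerance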
glmnet optimizes the following loss function:
$\frac{1}{2n}\sum_{i=1}^n (\hat{Y}_i-Y_i)^2 + \lambda\left(\frac{1-\alpha}{2}||\beta||_2^2 + \alpha ||\beta||_1 \right)$
The (scaled) residual sum of squares is on the left, as is typical for regression, and the penalty on the coefficients is on the right. $\alpha$ defaults to 1, which gives the LASSO penalty.
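As a quick illustration of how $\alpha$ selects the penalty (a sketch on placeholder data; the variable names are mine):

library(glmnet)

set.seed(2)
x <- matrix(rnorm(100 * 10), 100, 10)
y <- rnorm(100)

fit_lasso <- glmnet(x, y, alpha = 1)    # pure L1 penalty (the default)
fit_ridge <- glmnet(x, y, alpha = 0)    # pure L2 penalty
fit_enet  <- glmnet(x, y, alpha = 0.5)  # a 50/50 elastic-net mix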
Now, if you don't fit an intercept and $E(Y)$ is large, the residual term on the left will be very large. The model will try to account for that, but doing so requires larger coefficient values to stand in for the missing intercept. It may be the case (and I'm guessing here) that $E(Y)$ is large and one of your variables is fairly constant. In that case, that variable will get a large coefficient (as it helps to reduce the residual sum of squares), but the other variables would increase the penalty too much, and hence their coefficients are set to zero. The sketch below illustrates this scenario.
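This is a hedged sketch on simulated data of my own construction (not the asker's data): $E(Y)$ is large and one column is nearly constant, so without an intercept that column tends to do the intercept's job while the rest are shrunk toward zero:

library(glmnet)

set.seed(3)
n <- 200
x <- cbind(const = 1 + rnorm(n, sd = 0.01),  # nearly constant column
           matrix(rnorm(n * 4), n, 4))
y <- 50 + rowSums(x[, 2:5]) + rnorm(n)       # large E(Y)

# Without an intercept, the near-constant column absorbs the mean of y
coef(glmnet(x, y, intercept = FALSE, lambda = 0.5))
coef(glmnet(x, y, intercept = TRUE,  lambda = 0.5))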
Maybe you could supply your own lambda sequence to the function, something like
lambda = 10^seq(1, -4, -0.5)
If $\lambda$ is small enough, you should get more non-zero coefficients in the model without an intercept as well.
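For instance (placeholder data again; cv.glmnet passes a user-supplied lambda sequence straight through to glmnet):

library(glmnet)

set.seed(4)
x <- matrix(rnorm(100 * 10), 100, 10)
y <- 20 + rowSums(x[, 1:3]) + rnorm(100)

cvfit <- cv.glmnet(x, y, intercept = FALSE, lambda = 10^seq(1, -4, -0.5))
coef(cvfit, s = "lambda.min")  # more coefficients should survive at the smaller lambdas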
Note: I don't think this problem has anything to do with the fact that you're using cv.glmnet; you should see the same thing with plain glmnet.
Best Answer
For completeness' sake (and because I accidentally bumped into this question): starting with glmnet version 1.9-3, fitting without an intercept is supported (intercept=FALSE).
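For example (a minimal sketch on placeholder data):

library(glmnet)

set.seed(5)
x <- matrix(rnorm(100 * 5), 100, 5)
y <- rnorm(100)

fit <- glmnet(x, y, intercept = FALSE)  # requires glmnet >= 1.9-3
coef(fit)[1, ]                          # the intercept row is all zeros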