From what I know, using the lasso for variable selection handles the problem of correlated inputs. Also, since it is equivalent to Least Angle Regression, it is not computationally slow. However, many people (for example, people I know doing biostatistics) still seem to favour stepwise or stagewise variable selection. Are there any practical disadvantages of using the lasso that make it unfavourable?
Lasso Regression – Disadvantages of Using Lasso for Variable Selection in Regression
Tags: feature selection, lasso, regression
Related Solutions
This type of question is not really suitable for this site, since there can be no one correct answer. Here are my observations.
Your response variable is censored, so a linear model may not be the best choice, since it clearly will not produce exact zeroes. I would look into some sort of censored regression, such as tobit regression.
Since you want a regression model, forecasting future values of the response variable involves forecasting future values of the predictor variables. So when evaluating forecasting performance you should use forecasts of the predictor variables, not their out-of-sample values, to get a more reliable estimate of forecasting performance.
I think there is no single answer to your question - it depends on the situation, the data, and what you are trying to do. Some of the methods below can or should be modified to achieve your goal. However, the following general discussion can help.
Before jumping into the more advanced methods, let's discuss the basic model first: least squares (LS) regression. There are two reasons why a least squares estimate of the parameters in the full model can be unsatisfying:
Prediction quality: Least squares estimates often have a small bias but a high variance. Prediction quality can sometimes be improved by shrinking the regression coefficients or by setting some coefficients equal to zero. This increases the bias, but reduces the variance of the prediction significantly, which leads to an overall improved prediction. This tradeoff between bias and variance can be seen easily by decomposing the mean squared error (MSE). A smaller MSE leads to a better prediction of new values.
Interpretability: If many predictor variables are available, it makes sense to identify the ones that have the largest influence, and to set to zero the ones that are not relevant for prediction. Thus we eliminate variables that only explain some details, but keep those that allow for the major explanation of the response variable.
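The bias-variance tradeoff mentioned under prediction quality can be made precise. For an estimator $\hat{\beta}$ of a scalar parameter $\beta$, the MSE decomposes as

$$\mathrm{MSE}(\hat{\beta}) = \mathbb{E}\big[(\hat{\beta}-\beta)^2\big] = \big(\mathbb{E}[\hat{\beta}]-\beta\big)^2 + \mathrm{Var}(\hat{\beta}) = \text{squared bias} + \text{variance},$$

so accepting a small increase in bias is worthwhile whenever it buys a larger reduction in variance.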
Thus variable selection methods come into the scene. With variable selection only a subset of all input variables is used; the rest are eliminated from the model. Best subset regression finds the subset of size $k$ for each $k \in \{0, 1, ... , p\}$ that gives the smallest RSS. An efficient algorithm is the so-called leaps-and-bounds algorithm, which can handle up to $30$ or $40$ regressor variables. With more than $40$ input variables a search through all possible subsets becomes infeasible, so forward stepwise selection and backward stepwise selection are useful alternatives. Backward selection can only be used when $n > p$ in order to have a well-defined model. The computational efficiency of these methods is questionable when $p$ is very high.
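As a concrete illustration, forward stepwise selection can be sketched with scikit-learn's `SequentialFeatureSelector`; the synthetic data set and all parameter values below are made up for illustration, not taken from the question:

```python
# Sketch of forward stepwise selection (assumes scikit-learn >= 0.24,
# which provides SequentialFeatureSelector).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

# Synthetic data: 50 predictors, only 5 of which are informative.
X, y = make_regression(n_samples=200, n_features=50, n_informative=5,
                       noise=10.0, random_state=0)

# Greedily add one feature at a time, scoring each candidate by
# cross-validated fit of an OLS model.
sfs = SequentialFeatureSelector(LinearRegression(),
                                n_features_to_select=5,
                                direction="forward", cv=5)
sfs.fit(X, y)
print("selected columns:", np.flatnonzero(sfs.get_support()))
```

Setting `direction="backward"` gives backward elimination instead; note it needs $n > p$, as the text says, so that the full starting model is well defined.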
In many situations we have a large number of inputs (as you do), often highly correlated (as in your case). In the case of highly correlated regressors, OLS leads to numerically unstable parameters, i.e. unreliable $\beta$ estimates. To avoid this problem, we use methods based on derived input directions. These methods produce a small number of linear combinations $z_k, k = 1, 2, ... , q$ of the original inputs $x_j$, which are then used as inputs in the regression.
The methods differ in how the linear combinations are constructed. Principal components regression (PCR) looks for transformations of the original data into a new set of uncorrelated variables called principal components.
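A minimal PCR sketch, assuming a scikit-learn pipeline; the data and the number of components `q` are illustrative choices, not values from the question:

```python
# Sketch of principal components regression (PCR): standardize X,
# project onto the first q principal components, regress y on the scores.
from sklearn.datasets import make_regression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=200, n_features=30, n_informative=5,
                       noise=5.0, random_state=0)

q = 10  # number of components kept; a tuning parameter in practice
pcr = make_pipeline(StandardScaler(), PCA(n_components=q), LinearRegression())
pcr.fit(X, y)

# Note: the components are chosen by variance in X alone, with no
# reference to y - the point of contrast with PLS below.
print("in-sample R^2: %.3f" % pcr.score(X, y))
```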
Partial least squares (PLS) regression - this technique also constructs a set of linear combinations of the inputs for regression, but unlike principal components regression it uses $y$ in addition to $X$ for this construction. We assume that both $y$ and $X$ are centered. Instead of calculating the parameters $\beta$ in the linear model, we estimate the parameters $\gamma$ in the so-called latent variable model. We assume the new coefficients $\gamma$ are of dimension $q \le p$. PLS performs a regression on a weighted version of $X$ which contains incomplete or partial information. Since PLS also uses $y$ to determine the PLS directions, this method is supposed to have better prediction performance than, for instance, PCR. In contrast to PCR, PLS looks for directions with both high variance and large correlation with $y$.
Shrinkage methods keep all variables in the model and assign different (continuous) weights. In this way we obtain a smoother procedure with a smaller variability. Ridge regression shrinks the coefficients by imposing a penalty on their size. The ridge coefficients minimize a penalized residual sum of squares. Here $\lambda \ge 0$ is a complexity parameter that controls the amount of shrinkage: the larger the value of $\lambda$, the greater the amount of shrinkage. The coefficients are shrunk towards zero (and towards each other).
By penalizing the RSS we try to avoid that highly correlated regressors cancel each other. An especially large positive coefficient $\beta$ can be canceled by a similarly large negative coefficient $\beta$. By imposing a size constraint on the coefficients this phenomenon can be prevented.
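The shrinkage effect of $\lambda$ (called `alpha` in scikit-learn's `Ridge`) can be seen by tracking the norm of the coefficient vector as the penalty grows; the data and penalty grid below are synthetic and only illustrative:

```python
# Sketch: ridge coefficients shrink toward zero as the penalty grows.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge

X, y = make_regression(n_samples=100, n_features=10, noise=5.0,
                       random_state=0)

norms = []
for alpha in [0.1, 1.0, 100.0, 10000.0]:  # lambda, in sklearn's naming
    coef = Ridge(alpha=alpha).fit(X, y).coef_
    norms.append(np.linalg.norm(coef))

# The norms decrease monotonically with the penalty.
print([round(n, 2) for n in norms])
```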
It can be shown that PCR is very similar to ridge regression: both methods use the principal components of the input matrix $X$. Ridge regression shrinks the coefficients of the principal components, with the shrinkage depending on the corresponding eigenvalues; PCR completely discards the components corresponding to the smallest $p - q$ eigenvalues.
The lasso is a shrinkage method like ridge, but the L1 norm rather than the L2 norm is used in the constraint. The L1-norm loss function is also known as least absolute deviations (LAD) or least absolute errors (LAE); it minimizes the sum of the absolute differences between the target values and the estimated values. The L2-norm loss function is also known as least squares error (LSE); it minimizes the sum of the squared differences between the target values ($Y_i$) and the estimated values. The difference between the L1 and L2 penalties is that L2 is the sum of the squares of the weights, while L1 is the sum of their absolute values. The L1 norm tends to produce sparse coefficients and has built-in feature selection. The L1-norm problem does not have an analytical solution, but the L2-norm problem does, which allows L2-norm solutions to be computed efficiently. The L2-norm problem has a unique solution, while the L1-norm problem may not.
Lasso and ridge differ in their penalty term. The lasso solutions are nonlinear, and a quadratic programming algorithm is used to compute them. Because of the nature of the constraint, making $s$ sufficiently small will cause some of the coefficients to be exactly $0$. Thus the lasso does a kind of continuous subset selection. Like the subset size in subset selection, or the penalty parameter in ridge regression, $s$ should be chosen adaptively to minimize an estimate of expected prediction error.
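The "exactly $0$" behaviour is easy to demonstrate. In this sketch (synthetic data, illustrative penalty values, using scikit-learn's penalized formulation rather than the constraint form with $s$), the lasso zeroes out coefficients while ridge keeps all of them nonzero:

```python
# Sketch: with a sufficiently strong penalty, the lasso sets some
# coefficients exactly to zero; ridge only shrinks them.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

X, y = make_regression(n_samples=200, n_features=30, n_informative=5,
                       noise=10.0, random_state=0)

lasso = Lasso(alpha=5.0, max_iter=10000).fit(X, y)
ridge = Ridge(alpha=5.0).fit(X, y)

print("lasso coefficients exactly zero:", int((lasso.coef_ == 0.0).sum()))
print("ridge coefficients exactly zero:", int((ridge.coef_ == 0.0).sum()))
```

In practice the penalty is chosen by cross-validation (e.g. `LassoCV`), matching the advice above that $s$ should minimize an estimate of expected prediction error.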
When $p \gg N$, high variance and overfitting are major concerns. As a result, simple, highly regularized approaches often become the methods of choice.
Principal components analysis is an effective method for finding linear combinations of features that exhibit large variation in a dataset. But what we seek here are linear combinations with both high variance and significant correlation with the outcome. Hence we want to encourage principal component analysis to find linear combinations of features that have high correlation with the outcome - supervised principal components (see page 678, Algorithm 18.1, in the book Elements of Statistical Learning).
Partial least squares down-weights noisy features but does not throw them away; as a result, a large number of noisy features can contaminate the predictions. Thresholded PLS can be viewed as a noisy version of supervised principal components, and hence we might not expect it to work as well in practice. Supervised principal components can yield lower test errors than thresholded PLS. However, it does not always produce a sparse model involving only a small number of features.
The lasso, on the other hand, produces a sparse model from the data. Ridge tends to perform reasonably well on average. I think the lasso is a good choice when $p$ is large. Supervised principal components can also work well.
Best Answer
There is NO reason to do stepwise selection. It's just wrong.
LASSO/LAR are the best automatic methods. But they are automatic methods. They let the analyst not think.
In many analyses, some variables should be in the model REGARDLESS of any measure of significance. Sometimes they are necessary control variables. Other times, finding a small effect can be substantively important.