Regression in $p>n$ setting: how to choose regularization method (Lasso, PLS, PCR, ridge)

Tags: lasso, partial least squares, pca, regression, ridge regression

I am trying to decide whether to go for ridge regression, LASSO, principal component regression (PCR), or partial least squares (PLS) in a situation where there is a large number of variables/features ($p$) and a smaller number of samples ($n<p$), and my objective is prediction.

This is my understanding:

  1. Ridge regression shrinks the regression coefficients but keeps all of them in the model; none are set exactly to $0$.

  2. LASSO also shrinks the coefficients, but can set some of them exactly to $0$, meaning that it can do variable selection as well.

  3. Principal component regression (PCR) truncates the set of components: only the first $q < n$ components are used as predictors and the remaining $p-q$ are discarded.

  4. Partial least squares (PLS) also constructs a set of linear combinations of the inputs for regression, but unlike PCR it uses $y$ (in addition to $X$) for the dimensionality reduction. The main practical difference between PCR and PLS regression is that PCR often needs more components than PLS to achieve the same prediction error (see here).

Consider the following dummy data (the actual data I am trying to work with is similar):

#random population of 200 subjects with 1000 variables 

M <- matrix(0, 200, 1000)
for (i in 1:200) {
  set.seed(i)
  M[i, ] <- ifelse(runif(1000) < 0.5, -1, 1)
}
rownames(M) <- 1:200

#random yvars 
set.seed(1234)
u <- rnorm(1000)
g <- as.vector(crossprod(t(M),u))
h2 <- 0.5 
set.seed(234)
y <- g + rnorm(200,mean=0,sd=sqrt((1-h2)/h2*var(g)))

myd <- data.frame(y=y, M)

Implementation of four methods:

 require(glmnet)

 # LASSO
 fit1 <- glmnet(M, y, family = "gaussian", alpha = 1)

 # Ridge
 fit2 <- glmnet(M, y, family = "gaussian", alpha = 0)

 # PLS
 require(pls)
 fit3 <- plsr(y ~ ., ncomp = 198, data = myd, validation = "LOO")
 # 198 components, with leave-one-out cross-validation
 summary(fit3)
 plot(RMSEP(fit3), legendpos = "topright")

 # PCR
 fit4 <- pcr(y ~ ., ncomp = 198, data = myd, validation = "LOO")
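
A practical note on `ncomp`: the leave-one-out RMSEP curves from the fits above are what you would use to pick the number of components for prediction. A minimal sketch (assuming the `fit3` and `fit4` objects above; `selectNcomp()` exists in recent versions of the `pls` package and applies a one-sigma heuristic to the CV curve):

 # choose the number of components from the LOO cross-validation above
 ncomp_pls <- selectNcomp(fit3, method = "onesigma", plot = TRUE)
 ncomp_pcr <- selectNcomp(fit4, method = "onesigma", plot = TRUE)
 c(pls = ncomp_pls, pcr = ncomp_pcr)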

The best description of the data is:

  1. $p > n$, most of the time $p > 10n$;

  2. The variables (the columns of $X$, and $X$ with $y$) are correlated with each other to varying degrees.

My question is: which strategy is likely to work best in this situation, and why?

Best Answer

I think there is no single answer to your question - it depends on the situation, the data, and what you are trying to do. Some of the methods can or should be modified to achieve your goal. However, the following general discussion may help.

Before jumping into the more advanced methods, let's discuss the basic model first: least squares (LS) regression. There are two reasons why the least squares estimate of the parameters in the full model is often unsatisfying:

  1. Prediction quality: Least squares estimates often have low bias but high variance. Prediction quality can sometimes be improved by shrinking the regression coefficients or by setting some coefficients to zero. This increases the bias but reduces the variance of the prediction, which can lead to an overall improvement. The trade-off between bias and variance is easy to see by decomposing the mean squared error (MSE), as written out after this list; a smaller MSE means better prediction of new values.

  2. Interpretability: If many predictor variables are available, it makes sense to identify the ones that have the largest influence and to set to zero those that are not relevant for prediction. We thus eliminate variables that only explain some details and keep those that account for the major part of the explanation of the response variable.
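
For completeness, here is the decomposition referred to in point 1, in standard notation (my notation: $f$ is the true regression function, $\hat f$ the fitted model, and $y_0 = f(x_0) + \varepsilon$ a new observation at $x_0$ with noise variance $\sigma^2$):

$$E\big[(y_0 - \hat f(x_0))^2\big] \;=\; \underbrace{\big(E[\hat f(x_0)] - f(x_0)\big)^2}_{\text{bias}^2} \;+\; \underbrace{\operatorname{Var}\big(\hat f(x_0)\big)}_{\text{variance}} \;+\; \underbrace{\sigma^2}_{\text{irreducible error}}$$

Shrinkage methods accept a small increase in the first term in exchange for a large decrease in the second.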

Thus variable selection methods come into play. With variable selection only a subset of all input variables is used; the rest are eliminated from the model. Best subset regression finds, for each $k \in \{0, 1, \ldots, p\}$, the subset of size $k$ that gives the smallest RSS. An efficient algorithm is the so-called leaps-and-bounds algorithm, which can handle up to about $30$ or $40$ regressor variables. With more input variables than that, a search through all possible subsets becomes infeasible, so forward stepwise selection and backward stepwise selection are useful instead. Backward selection can only be used when $n > p$ in order to have a well-defined model, and the computational efficiency of these methods is questionable when $p$ is very high.
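
For illustration, forward stepwise selection on your simulated data could be run with the `leaps` package (a minimal sketch; the cap of 20 variables and the variable names `V1, ..., V1000` are arbitrary choices of mine):

 require(leaps)

 # forward stepwise selection on a named copy of the design matrix;
 # an exhaustive best-subset search is infeasible with p = 1000
 Xn <- M
 colnames(Xn) <- paste0("V", 1:ncol(Xn))
 fwd <- regsubsets(x = Xn, y = y, nvmax = 20, method = "forward",
                   really.big = TRUE)
 which.min(summary(fwd)$bic)   # subset size preferred by BIC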

In many situations we have a large number of inputs (as you do), often highly correlated (as in your case). With highly correlated regressors, OLS leads to numerically unstable parameter estimates, i.e. unreliable $\hat\beta$ estimates. To avoid this problem, we use methods based on derived input directions. These methods produce a small number of linear combinations $z_k$, $k = 1, 2, \ldots, q$, of the original inputs $x_j$, which are then used as inputs in the regression.

The methods differ in how the linear combinations are constructed. Principal components regression (PCR) looks for transformations of the original data into a new set of uncorrelated variables called principal components.
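
To make this concrete on your simulated data, PCR is just PCA followed by ordinary regression on the leading score vectors (a sketch by hand, doing the same thing in spirit as the `pcr()` call in the question; the choice of $q = 20$ components is arbitrary):

 # principal components of the (centered) design matrix
 pc <- prcomp(M, center = TRUE, scale. = FALSE)

 # regress y on the first q score vectors only; the rest are discarded
 q <- 20
 pcr_by_hand <- lm(y ~ pc$x[, 1:q])
 summary(pcr_by_hand)$r.squared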

Partial least squares (PLS) regression also constructs a set of linear combinations of the inputs for regression, but unlike principal components regression it uses $y$ in addition to $X$ for this construction. We assume that both $y$ and $X$ are centered. Instead of calculating the parameters $\beta$ in the linear model, we estimate the parameters $\gamma$ in the so-called latent variable model; the new coefficients $\gamma$ are of dimension $q \le p$. PLS does a regression on a weighted version of $X$ that contains incomplete or partial information. Since PLS also uses $y$ to determine the PLS directions, this method is expected to have better prediction performance than, for instance, PCR. In contrast to PCR, PLS looks for directions that have both high variance and high correlation with $y$.
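
The difference shows up directly in your simulated data: the first PLS score is built to track $y$, while the first principal component ignores $y$ entirely. A small check (assuming the `fit3` object from the question and the `pc` object from the PCR sketch above):

 # correlation of y with the first derived direction of each method
 cor(y, scores(fit3)[, 1])   # first PLS component (constructed using y)
 cor(y, pc$x[, 1])           # first principal component (constructed from X alone)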

Shrinkage methods keep all variables in the model and assign them different (continuous) weights. In this way we obtain a smoother procedure with smaller variability. Ridge regression shrinks the coefficients by imposing a penalty on their size: the ridge coefficients minimize a penalized residual sum of squares,
$$\hat{\beta}^{\text{ridge}} = \underset{\beta}{\operatorname{argmin}} \left\{ \sum_{i=1}^{n} \Big( y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\beta_j \Big)^2 + \lambda \sum_{j=1}^{p} \beta_j^2 \right\}.$$
Here $\lambda \ge 0$ is a complexity parameter that controls the amount of shrinkage: the larger the value of $\lambda$, the greater the amount of shrinkage. The coefficients are shrunk towards zero (and towards each other).
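
In matrix form this minimizer has the closed-form solution $\hat\beta^{\text{ridge}} = (X^\top X + \lambda I)^{-1} X^\top y$, which can be computed directly; a sketch on the centered simulated data ($\lambda = 10$ is an arbitrary value for illustration, and note that `glmnet` parameterizes its `lambda` differently):

 # closed-form ridge solution on centered data (no intercept after centering)
 Xc <- scale(M, center = TRUE, scale = FALSE)
 yc <- y - mean(y)
 lambda <- 10
 beta_ridge <- solve(crossprod(Xc) + lambda * diag(ncol(Xc)), crossprod(Xc, yc))
 range(beta_ridge)   # every coefficient is shrunk, none is exactly zero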

By penalizing the RSS we try to prevent highly correlated regressors from cancelling each other out: an especially large positive coefficient can be cancelled by a similarly large negative coefficient on a correlated variable. Imposing a size constraint on the coefficients prevents this phenomenon.

It can be shown that PCR is very similar to ridge regression: both methods operate on the principal components of the input matrix $X$. Ridge regression shrinks the coefficients of the principal components, with the amount of shrinkage depending on the corresponding eigenvalues; PCR completely discards the components corresponding to the smallest $p - q$ eigenvalues.
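
In terms of the singular values $d_j$ of the centered $X$, ridge shrinks the contribution of the $j$-th principal component direction by the factor $d_j^2/(d_j^2 + \lambda)$, whereas PCR applies a factor of exactly $1$ to the first $q$ components and $0$ to the rest. A small sketch of the two sets of factors (reusing `Xc`, `lambda` and `q` from the sketches above):

 # singular values of the centered design matrix
 d <- svd(Xc)$d

 ridge_factor <- d^2 / (d^2 + lambda)           # smooth shrinkage (ridge)
 pcr_factor   <- as.numeric(seq_along(d) <= q)  # hard truncation (PCR)

 head(cbind(ridge_factor, pcr_factor))
 tail(cbind(ridge_factor, pcr_factor))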

The lasso is a shrinkage method like ridge, but with the L1 norm rather than the L2 norm in the constraint. The L1-norm loss function is also known as least absolute deviations (LAD) or least absolute errors (LAE): it minimizes the sum of the absolute differences between the target values and the estimated values. The L2-norm loss function is also known as least squares error (LSE): it minimizes the sum of the squared differences between the target values ($Y_i$) and the estimated values. As penalties, the difference between L1 and L2 is that L2 is the sum of the squares of the weights, while L1 is the sum of their absolute values. The L1 penalty tends to produce sparse coefficients and has built-in feature selection; it has no analytical solution, whereas the L2 penalty does, which allows L2 solutions to be computed efficiently. The L2-penalized problem has a unique solution, while the L1-penalized problem may not.

Lasso and ridge differ in their penalty term. The lasso solutions are nonlinear in $y$, and a quadratic programming algorithm can be used to compute them. Because of the nature of the constraint, making the constraint bound $s$ sufficiently small will cause some of the coefficients to be exactly $0$; thus the lasso does a kind of continuous subset selection. Like the subset size in subset selection, or the penalty parameter in ridge regression, $s$ should be chosen adaptively to minimize an estimate of expected prediction error.
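
In practice this tuning is done by cross-validation; `cv.glmnet` does it over a grid of penalty values for both the lasso and ridge fits from the question (a sketch; the seed is arbitrary and 10-fold CV is the default):

 set.seed(123)
 cv_lasso <- cv.glmnet(M, y, alpha = 1)   # cross-validated lasso path
 cv_ridge <- cv.glmnet(M, y, alpha = 0)   # cross-validated ridge path

 cv_lasso$lambda.min                                 # penalty minimizing CV error
 sum(coef(cv_lasso, s = "lambda.min") != 0) - 1      # nonzero coefficients (minus intercept)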

When $p \gg n$, high variance and overfitting are a major concern. As a result, simple, highly regularized approaches often become the methods of choice.

Principal components analysis is an effective method for finding linear combinations of features that exhibit large variation in a dataset. But what we seek here are linear combinations with both high variance and substantial correlation with the outcome. Hence we want to encourage principal component analysis to find linear combinations of features that have high correlation with the outcome - supervised principal components (see page 678, Algorithm 18.1, in the book The Elements of Statistical Learning).
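
A rough sketch of the supervised principal components idea on your simulated data (not the full Algorithm 18.1, which also chooses the threshold and the number of components by cross-validation; the 90% quantile threshold and the three components below are arbitrary choices for illustration):

 # 1. univariate association of y with each standardized feature
 Xs <- scale(M)
 uni_score <- as.vector(crossprod(Xs, y - mean(y))) / (nrow(Xs) - 1)

 # 2. keep only features whose score exceeds a threshold (here: top 10%)
 keep <- abs(uni_score) > quantile(abs(uni_score), 0.9)

 # 3. PCA on the reduced matrix, then regress y on the leading components
 pc_sup  <- prcomp(M[, keep])
 fit_sup <- lm(y ~ pc_sup$x[, 1:3])
 summary(fit_sup)$r.squared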

Partial least squares down-weights noisy features but does not discard them; as a result, a large number of noisy features can contaminate the predictions. Thresholded PLS can be viewed as a noisy version of supervised principal components, and hence we might not expect it to work as well in practice. Supervised principal components can yield lower test errors than thresholded PLS; however, it does not always produce a sparse model involving only a small number of features.

The lasso, on the other hand, produces a sparse model from the data, while ridge keeps every variable and can be expected to give roughly average performance across settings. I think the lasso is a good choice when $p$ is large, as here; supervised principal components can also work well.
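
For data like yours, the honest way to settle the choice is to estimate prediction error directly, e.g. on a held-out split of the simulated data. A sketch reusing the objects defined in the question (the split, the seed, the `ncomp` cap, and the 20 components used for prediction are all arbitrary; in practice pick `ncomp` from the CV curves):

 set.seed(42)
 train <- sample(1:200, 150)

 cv_l  <- cv.glmnet(M[train, ], y[train], alpha = 1)
 cv_r  <- cv.glmnet(M[train, ], y[train], alpha = 0)
 pls_f <- plsr(y ~ ., ncomp = 50, data = myd[train, ], validation = "CV")
 pcr_f <- pcr(y ~ ., ncomp = 50, data = myd[train, ], validation = "CV")

 mse <- function(pred) mean((y[-train] - pred)^2)
 c(lasso = mse(predict(cv_l, M[-train, ], s = "lambda.min")),
   ridge = mse(predict(cv_r, M[-train, ], s = "lambda.min")),
   pls   = mse(predict(pls_f, newdata = myd[-train, ], ncomp = 20)),
   pcr   = mse(predict(pcr_f, newdata = myd[-train, ], ncomp = 20)))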