Solved – Performing Cross Validation to Compare Lasso and Other Regression Models in R

cross-validation, glmnet, r, regression

I have built some "regular" and robust regression models, using the standard lm function as well as rlm and lmrob. While I know that there is some discussion about using stepwise regression, I have used the stepAIC function to prune my variable set. After I've gotten a reduced set of variables using stepAIC, I've then run some robust regressions.

The cvTools package allows me to use cross validation to compare the performance of my various models.
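
For reference, the kind of side-by-side comparison I have in mind looks roughly like the sketch below (mydata and y are placeholder names for my data frame and response, and I'm assuming the cvFolds()/cvFit()/cvSelect() interface as documented in cvTools):

library(cvTools)      # cvFolds(), cvFit(), cvSelect(), rtmspe()
library(MASS)         # rlm()
library(robustbase)   # lmrob()

set.seed(1)
folds <- cvFolds(nrow(mydata), K = 5, R = 10)   # reuse the same folds for every model

fit_lm    <- lm(y ~ ., data = mydata)
fit_rlm   <- rlm(y ~ ., data = mydata)
fit_lmrob <- lmrob(y ~ ., data = mydata)

cv_lm    <- cvFit(fit_lm,    data = mydata, y = mydata$y, cost = rtmspe, folds = folds, trim = 0.1)
cv_rlm   <- cvFit(fit_rlm,   data = mydata, y = mydata$y, cost = rtmspe, folds = folds, trim = 0.1)
cv_lmrob <- cvFit(fit_lmrob, data = mydata, y = mydata$y, cost = rtmspe, folds = folds, trim = 0.1)

cvSelect(LS = cv_lm, M = cv_rlm, MM = cv_lmrob)  # side-by-side CV prediction error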

I obviously would like to run lasso regression (ideally using the glmnet package) on my dataset.

My question is whether there is an existing package or function that will allow me to use cross validation to compare the lasso regression model with the other models.

If there is not, then my initial thought was to go back to first principles and manually code K-fold cross validation for lasso regression. However, I am now wondering whether this is theoretically a good idea. Each time I run a fold in my manual CV, I would run cv.glmnet on the training set. Each training set would most likely result in a different lambda.min and lambda.1se.

My question is: is it technically proper CV to determine the overall CV error by averaging the error on each fold given that the lambda chosen for each fold will be producing a different lasso result?

Here is some sample code that I have used to perform leave-one-out CV on the dataset to evaluate the lasso regression. I compute the cross validation error on each fold using the lambda.1se and lambda.min values that arise for that fold.

library(glmnet)   # cv.glmnet(), predict()

lassocv <- function() {
  # Leave-one-out CV; drx (predictor data frame) and yx (response vector) are defined outside.
  len <- length(drx$DR)

  errmin <- 0   # accumulated absolute error using lambda.min
  err1se <- 0   # accumulated absolute error using lambda.1se
  print(len)

  for (i in 1:len) {
    gmer <- data.matrix(drx[-i, ])    # training predictors (row i held out)
    yxer <- yx[-i]                    # training response
    lfit <- cv.glmnet(gmer, yxer)     # CV on this fold's training set chooses lambda
    newr <- data.matrix(drx[i, ])     # the held-out observation
    pmin <- predict(lfit, newx = newr, s = lfit$lambda.min)[[1]]
    p1se <- predict(lfit, newx = newr, s = lfit$lambda.1se)[[1]]
    errmin <- errmin + abs(pmin - yx[i])
    err1se <- err1se + abs(p1se - yx[i])
  }

  print(errmin / len)   # mean absolute error at lambda.min
  print(err1se / len)   # mean absolute error at lambda.1se
}

However, the two criteria give different CV results: for my dataset the function returns a mean absolute error of 21.94867 at lambda.min and 23.74074 at lambda.1se.

Best Answer

I'm not entirely sure I understand precisely where in the analysis pipeline your question sits, but I think I can address it by walking through the steps you'll want to take. The software portion of your question is off-topic here on Cross Validated, but the questions about cross-validation are on-topic, so I'll answer those.

My question is: is it technically proper CV to determine the overall CV error by averaging the error on each fold given that the lambda chosen for each fold will be producing a different lasso result?

The elementary model development process is usually presented with respect to three partitions of your whole data set: train, test and validate. Training and test data are used together to tune model hyperparameters. Validation data is used to assess the performance of alternative models against data that wasn't used in model construction. The notion is that this is representative of new data that the model might encounter.
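
As a rough sketch of that three-way split in R (mydata is a placeholder data frame, and the 60/20/20 proportions are arbitrary):

set.seed(1)
n    <- nrow(mydata)
part <- sample(c("train", "test", "validate"), size = n, replace = TRUE, prob = c(0.6, 0.2, 0.2))

train_set    <- mydata[part == "train", ]     # used, with test_set, to tune hyperparameters
test_set     <- mydata[part == "test", ]
validate_set <- mydata[part == "validate", ]  # held back to assess the competing models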

A slightly more sophisticated elaboration on this process is nested cross-validation. This is preferred because, across the whole process, all of the data is eventually used for both training and testing the model. Instead of using a single partitioning of the data, you do CV partitioning on the whole data set (the outer partition) and then again on the data left over when you hold out one of the outer partitions (the inner set). You tune the model hyperparameters on the inner set and evaluate out-of-sample performance on the outer holdout set. The final model is prepared by running one last round of CV over the entire data set to select a final tuple of hyperparameters and then, at last, estimating a single model on all available data given that selected tuple. In this way, the model-building process telescopes on itself, collapsing CV steps as we estimate the final model.
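
To make the outer/inner structure concrete, here is a rough nested-CV sketch using the drx/yx objects from your question (the fold count and the absolute-error metric are arbitrary choices on my part, not something your setup dictates):

library(glmnet)   # cv.glmnet(), predict()

set.seed(1)
K       <- 10
x       <- data.matrix(drx)
fold_id <- sample(rep(1:K, length.out = nrow(x)))   # outer partition

outer_err <- numeric(K)
for (k in 1:K) {
  hold  <- fold_id == k
  # inner CV: cv.glmnet does its own K-fold split of the outer training data to pick lambda
  inner <- cv.glmnet(x[!hold, , drop = FALSE], yx[!hold])
  pred  <- predict(inner, newx = x[hold, , drop = FALSE], s = "lambda.min")
  outer_err[k] <- mean(abs(pred - yx[hold]))         # error on the outer holdout only
}

mean(outer_err)   # characterizes the selection procedure, not any one fixed lambda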

It doesn't matter that alternative inner sets might give you different $\lambda_\text{min}$. What you're characterizing with your out-of-sample performance metrics is the model selection process itself. At the end of the day, you'll still only estimate one model, and that's the value of $\lambda_\text{min}$ that you care about. In the preceding steps, you don't need to know the particular value of $\lambda_\text{min}$ except as a means to obtain out-of-sample estimates.
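
In glmnet terms, the collapse to a single final model is just one more cv.glmnet call on all of the available data, something like the following (new_drx is a placeholder for genuinely new observations):

library(glmnet)

final_cv <- cv.glmnet(data.matrix(drx), yx)   # last round of CV on the full data set
coef(final_cv, s = "lambda.min")              # the one model you actually report
# predict(final_cv, newx = data.matrix(new_drx), s = "lambda.min") for new data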

While I know that there is some discussion about using stepwise regression, I have used the stepAIC function to prune my variable set.

This is a bit of an understatement: it's not a discussion so much as a consensus that stepwise results are dubious. If you're fitting a lasso anyway, you can get a statistically valid model by omitting the stepwise regression step from your analysis. If you keep it, the lasso step won't "see" the stepwise selection, so your error bands will be too narrow and your cross-validation results will be irreparably biased. And the lasso makes the entire stepwise step pointless anyway, because they solve the same problem: the lasso handles all of the variable selection that stepwise attempts to, while avoiding the wealth of widely-accepted criticisms of stepwise strategies. There's no downside to using the lasso on its own in this case. I'm convinced the only reason stepwise methods are included in R is for pedagogical reasons, so that the functionality is available should someone need to demonstrate why it's hazardous.
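
Concretely, the variable selection you were reaching for with stepAIC falls out of the lasso fit itself; a sketch with the drx/yx objects from your question (lambda.1se is just one common choice of penalty):

library(glmnet)

# Lasso on the full candidate set -- no stepAIC pruning beforehand.
sel <- cv.glmnet(data.matrix(drx), yx)
coef(sel, s = "lambda.1se")   # coefficients printed as "." are the variables the lasso dropped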