PCA for Feature Selection – Using Principal Component Analysis Effectively

Tags: feature-selection, pca

I'm new to feature selection and I was wondering how you would use PCA to perform feature selection. Does PCA compute a relative score for each input variable that you can use to filter out noninformative input variables? Basically, I want to be able to order the original features in the data by variance or amount of information contained.

Best Answer

The basic idea when using PCA as a tool for feature selection is to select variables according to the magnitude (from largest to smallest in absolute value) of their coefficients (loadings). You may recall that PCA seeks to replace $p$ (more or less correlated) variables by $k<p$ uncorrelated linear combinations (projections) of the original variables. Let us ignore how to choose an optimal $k$ for the problem at hand. Those $k$ principal components are ranked by importance through their explained variance, and each variable contributes to each component to a varying degree. Using the largest-variance criterion would be akin to feature extraction, where the principal components themselves are used as new features instead of the original variables. However, we can decide to keep only the first component and select the $j<p$ variables that have the highest absolute coefficients; the number $j$ might be based on a proportion of the $p$ variables (e.g., keep only the top 10%) or on a fixed cutoff (e.g., a threshold on the normalized coefficients). This approach bears some resemblance to the Lasso operator in penalized regression (or to PLS regression). Neither the value of $j$ nor the number of components to retain is an obvious choice, though.
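As an illustration, here is a minimal R sketch of this loading-based selection using `prcomp`; the built-in `iris` data and the cutoff $j=2$ are arbitrary choices for demonstration:

```r
# Loading-based selection: rank variables by the magnitude of their
# coefficients on the first principal component.
X <- scale(iris[, 1:4])        # standardize so loadings are comparable
pca <- prcomp(X)

# Absolute loadings of each variable on PC1
loadings_pc1 <- abs(pca$rotation[, 1])

# Keep the j variables with the largest absolute coefficients
j <- 2
selected <- names(sort(loadings_pc1, decreasing = TRUE))[1:j]
selected
```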

There are three problems with using PCA this way:

1. Measurements from all of the original variables are used in the projection to the lower-dimensional space, so no variable is actually discarded.
2. Only linear relationships are considered.
3. PCA- or SVD-based methods, as well as univariate screening methods (t-test, correlation, etc.), do not take into account the potential multivariate nature of the data structure (e.g., higher-order interactions between variables).

About point 1, more elaborate screening methods have been proposed, for example principal feature analysis or stepwise methods, like the one used for 'gene shaving' in gene expression studies. Also, sparse PCA might be used to perform dimension reduction and variable selection based on the resulting variable loadings. About point 2, kernel PCA (using the kernel trick) can be used if one needs to embed nonlinear relationships into a lower-dimensional space. Decision trees, or better the random forest algorithm, are probably better able to address point 3; the latter allows one to derive Gini- or permutation-based measures of variable importance.
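To make points 2 and 3 concrete, here is a short, hypothetical sketch using the kernlab and randomForest packages on the built-in `iris` data; the RBF kernel, its `sigma`, and the number of features are arbitrary choices:

```r
library(kernlab)
library(randomForest)

# Point 2: kernel PCA embeds nonlinear structure (RBF kernel assumed here)
kp <- kpca(~ ., data = iris[, 1:4], kernel = "rbfdot",
           kpar = list(sigma = 0.1), features = 2)
head(rotated(kp))   # projections of the observations onto the kernel PCs

# Point 3: random forest variable importance
set.seed(101)
rf <- randomForest(Species ~ ., data = iris, importance = TRUE)
importance(rf)      # MeanDecreaseAccuracy (permutation) and MeanDecreaseGini
varImpPlot(rf)
```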

A last point: If you intend to perform feature selection before applying a classification or regression model, be sure to cross-validate the whole process (see §7.10.2 of *The Elements of Statistical Learning*, or Ambroise and McLachlan, 2002).


As you seem to be interested in an R solution, I would recommend taking a look at the caret package, which includes a lot of handy functions for data preprocessing and variable selection in a classification or regression context.
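For example, here is a minimal caret sketch (assuming a classification task; the k-NN model and the `iris` data are arbitrary illustrations). Because the PCA preprocessing is specified inside `train()`, it is re-estimated within each resampling fold, so the whole process is cross-validated, in line with the point above:

```r
library(caret)

set.seed(101)
# PCA is applied inside each fold, so preprocessing and model
# are cross-validated together
fit <- train(Species ~ ., data = iris,
             method = "knn",
             preProcess = c("center", "scale", "pca"),
             trControl = trainControl(method = "cv", number = 10))
fit
```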