MATLAB: How to select the components that show the most variance in PCA

classification, dimensionality reduction, machine learning, MATLAB, pca

I have a huge data set that I need for training (32000*2500). This seems to be too much for my classifier, so I decided to do some reading on dimensionality reduction and specifically PCA.
From my understanding, PCA takes the existing data and re-expresses it in a new coordinate system. The new coordinates don't mean anything by themselves, but the data is rotated so that the first axis captures the maximum variation. After computing these new coefficients, I can drop the ones with the least variation.
Now I am trying to implement this in MATLAB and am having trouble with the output provided. MATLAB always considers rows as observations and columns as variables, so my input to the pca function would be my matrix of size (32000*2500). This returns the PCA coefficients in an output matrix of size 2500*2500.
The help for pca states:
Each column of coeff contains coefficients for one principal component, and the columns are in descending order of component variance.
In this output, which dimension holds the observations of my data? I mean, if I have to feed this to the classifier, will the rows of coeff represent my data's observations, or is it now the columns of coeff?
And how do I remove the coefficients having the least variation, and thus effectively reduce the dimension of my data?

Best Answer

Here is some code I wrote to help myself understand the MATLAB syntax for PCA.
rng 'default'
M = 7; % Number of observations
N = 5; % Number of variables observed
% Made-up data
X = rand(M,N);
% De-mean (MATLAB will de-mean inside of PCA, but I want the de-meaned values later)
X = X - mean(X); % Use X = bsxfun(@minus,X,mean(X)) if you have an older version of MATLAB
% Do the PCA
[coeff,score,latent,~,explained] = pca(X);
% Calculate eigenvalues and eigenvectors of the covariance matrix
covarianceMatrix = cov(X);
[V,D] = eig(covarianceMatrix);
% "coeff" are the principal component vectors.
% These are the eigenvectors of the covariance matrix.
% Compare "coeff" and "V". Notice that they are the same,
% except for column ordering and an unimportant overall sign.
coeff
coeff = 5×5
   -0.5173    0.7366   -0.1131    0.4106    0.0919
    0.6256    0.1345    0.1202    0.6628   -0.3699
   -0.3033   -0.6208   -0.1037    0.6252    0.3479
    0.4829    0.1901   -0.5536   -0.0308    0.6506
    0.1262    0.1334    0.8097    0.0179    0.5571
V
V = 5×5
    0.0919    0.4106   -0.1131   -0.7366   -0.5173
   -0.3699    0.6628    0.1202   -0.1345    0.6256
    0.3479    0.6252   -0.1037    0.6208   -0.3033
    0.6506   -0.0308   -0.5536   -0.1901    0.4829
    0.5571    0.0179    0.8097   -0.1334    0.1262
% Multiply the original data by the principal component vectors to get the
% projections of the original data on the principal component vector space.
% This is also the output "score". Compare ...
dataInPrincipalComponentSpace = X*coeff
dataInPrincipalComponentSpace = 7×5
   -0.5295    0.0362    0.5630    0.1053   -0.0428
    0.2116    0.6573   -0.1721   -0.0306   -0.1559
    0.6427   -0.0017    0.2739   -0.1635    0.2203
   -0.6273    0.0239   -0.3678   -0.0710    0.2214
    0.1332    0.0507   -0.0708    0.2772    0.0398
    0.3145   -0.4825   -0.2080    0.1496   -0.0842
   -0.1451   -0.2840   -0.0182   -0.2670   -0.1987
score
score = 7×5
   -0.5295    0.0362    0.5630    0.1053   -0.0428
    0.2116    0.6573   -0.1721   -0.0306   -0.1559
    0.6427   -0.0017    0.2739   -0.1635    0.2203
   -0.6273    0.0239   -0.3678   -0.0710    0.2214
    0.1332    0.0507   -0.0708    0.2772    0.0398
    0.3145   -0.4825   -0.2080    0.1496   -0.0842
   -0.1451   -0.2840   -0.0182   -0.2670   -0.1987
% The columns of X*coeff are orthogonal to each other.
% This is shown with ...
corrcoef(dataInPrincipalComponentSpace)
ans = 5×5
    1.0000   -0.0000    0.0000   -0.0000   -0.0000
   -0.0000    1.0000    0.0000   -0.0000    0.0000
    0.0000    0.0000    1.0000    0.0000    0.0000
   -0.0000   -0.0000    0.0000    1.0000   -0.0000
   -0.0000    0.0000    0.0000   -0.0000    1.0000
% The variances of these vectors are the eigenvalues of the covariance matrix,
% and are also the output "latent". Compare these three outputs
var(dataInPrincipalComponentSpace)'
ans = 5×1
    0.2116
    0.1250
    0.1009
    0.0357
    0.0286
latent
latent = 5×1
    0.2116
    0.1250
    0.1009
    0.0357
    0.0286
sort(diag(D),'descend')
ans = 5×1
    0.2116
    0.1250
    0.1009
    0.0357
    0.0286
The first figure on the wikipedia page for PCA is really helpful in understanding what is going on. There is variation along the original (x,y) axes. The superimposed arrows show the principal axes. The long arrow is the axis that has the most variation; the short arrow captures the rest of the variation.
Before thinking about dimension reduction, the first step is to redefine a coordinate system (x',y'), such that x' is along the first principal component, and y' along the second component (and so on, if there are more variables).
In my code above, those new variables are dataInPrincipalComponentSpace. As in the original data, each row is an observation, and each column is a dimension.
These data are just like your original data, except it is as if you measured them in a different coordinate system -- the principal axes.
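If it helps to convince yourself that nothing is lost in the change of coordinates, you can rotate the scores back into the original variable space. This is just a sanity check I'm adding here (not part of the demo above); the variable names match the earlier code.
% Because coeff is orthonormal, multiplying the scores by coeff' undoes the
% rotation. Up to round-off, this reproduces the de-meaned X from above.
Xreconstructed = dataInPrincipalComponentSpace * coeff';
max(abs(Xreconstructed(:) - X(:)))   % expect something on the order of 1e-16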
Now you can think about dimension reduction. Take a look at the variable explained. It tells you how much of the variation is captured by each column of dataInPrincipalComponentSpace. Here is where you have to make a judgement call. How much of the total variation are you willing to ignore? One guideline is that if you plot explained, there will often be an "elbow" in the plot, where each additional variable explains very little additional variation. Keep only the components that add a lot more explanatory power, and ignore the rest.
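For example, here is a small sketch using the explained output from the code above; the 95% threshold is just an arbitrary choice for illustration:
% "explained" is the percentage of total variance captured by each component
% (it sums to 100). Plot it to look for the elbow, and use a cumulative sum
% to see how many components reach a chosen fraction of the total variance.
figure
plot(explained,'o-')
xlabel('Principal component')
ylabel('Variance explained (%)')
cumulativeExplained = cumsum(explained);
numComponentsToKeep = find(cumulativeExplained >= 95, 1)   % e.g. keep 95% of the variance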
In my code, notice that the first 3 components together explain 87% of the variation; suppose you decide that that's good enough. Then, for your later analysis, you would only keep those 3 dimensions -- the first three columns of dataInPrincipalComponentSpace. You will have 7 observations in 3 dimensions (variables) instead of 5.
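Applied to your own 32000*2500 training matrix, that would look something like the sketch below (trainingData, testData, and numComponentsToKeep are placeholder names, not from the code above). You can also ask pca directly for a limited number of components with the 'NumComponents' option, which saves you from storing the full 2500*2500 coefficient matrix.
% Hypothetical sketch for the 32000-by-2500 case:
% rows of trainingData are observations, columns are variables.
[coeff,score,~,~,explained] = pca(trainingData,'NumComponents',numComponentsToKeep);
reducedTrainingData = score;   % 32000-by-numComponentsToKeep; feed this to the classifier
% To project new (test) data into the same space, subtract the TRAINING mean
% and multiply by the same coeff:
reducedTestData = (testData - mean(trainingData)) * coeff;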
I hope that helps!