Solved – Interpreting discrepancies between R and SPSS with exploratory factor analysis

Tags: factor-analysis, r, spss

I am a graduate student in computer science. I have been doing some exploratory factor analysis for a research project. My colleagues (who are leading the project) use SPSS, while I prefer to use R. This didn't matter until we discovered a major discrepancy between the two statistical packages.

We are using principal axis factoring as the extraction method (note that I am well aware of the difference between PCA and factor analysis, and that we are not using PCA, at least not intentionally). From what I've read, this should correspond to the "principal axis" method in R and, according to the R documentation, to either "principal axis factoring" or "unweighted least squares" in SPSS. We are using an oblique rotation method (specifically, promax) because we expect correlated factors, and we are interpreting the pattern matrix.

Running the two procedures in R and SPSS produces major differences. The pattern matrices give different loadings: although the factor-to-variable relationships are more or less the same, corresponding loadings differ by up to 0.15, which seems larger than I would expect from mere implementation differences in the extraction method and promax rotation. However, that is not the most startling difference.

The cumulative variance explained by the factors is around 40% in the SPSS results, and 31% in the R results. This is a huge difference, and has led to my colleagues wanting to use SPSS instead of R. I have no problem with this, but a difference that big makes me think that we might be interpreting something incorrectly, which is a problem.

Muddying the waters even more, SPSS reports different types of explained variance when we run unweighted least squares factoring. The proportion of variance explained by the Initial Eigenvalues is 40%, while the proportion explained by the Extraction Sums of Squared Loadings (SSL) is 33%. This leads me to think that the Initial Eigenvalues are not the appropriate numbers to look at (I suspect they describe the variance explained before rotation, though why the figure is so large is beyond me). Even more confusing, SPSS also shows Rotation SSLs but does not calculate the corresponding percentage of explained variance (SPSS tells me that having correlated factors means I cannot add SSLs to find the total variance, which makes sense given the math I've seen). The SSLs reported by R do not match any of these, and R tells me that they describe 31% of the total variance; they match the Rotation SSLs most closely. R's eigenvalues of the original correlation matrix do, however, match the Initial Eigenvalues from SPSS.
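For reference, this is roughly how I checked the eigenvalues on the R side (a minimal sketch; `data` is our data frame, assumed complete here):

    # Eigenvalues of the raw correlation matrix -- these are the numbers I
    # compared against the "Initial Eigenvalues" column in the SPSS output
    eigen(cor(data))$values
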

Also, please note that I have played around with using different methods, and that SPSS's ULS and PAF seem to match R's PA method the closest.

My specific questions:

  1. How much of a difference should I expect between R and SPSS with factor analysis implementations?
  2. Which of the Sums of Squared Loadings from SPSS should I be interpreting: Initial Eigenvalues, Extraction, or Rotation?
  3. Are there any other issues that I might have overlooked?

My calls to SPSS and R are as follows:

SPSS:

FACTOR
/VARIABLES <variables>
/MISSING PAIRWISE
/ANALYSIS <variables>
/PRINT INITIAL KMO AIC EXTRACTION ROTATION
/FORMAT BLANK(.35)
/CRITERIA FACTORS(6) ITERATE(25)
/EXTRACTION ULS
/CRITERIA ITERATE(25)
/ROTATION PROMAX(4).

R:

library(psych)
fa.results <- fa(data, nfactors=6, rotate="promax",
scores=TRUE, fm="pa", oblique.scores=FALSE, max.iter=25)

Best Answer

First of all, I second ttnphns's recommendation to look at the solution before rotation. Factor analysis as implemented in SPSS is a complex procedure with several steps; comparing the results of each of these steps should help you pinpoint the problem.

Specifically, you can run

FACTOR
/VARIABLES <variables>
/MISSING PAIRWISE
/ANALYSIS <variables>
/PRINT CORRELATION
/CRITERIA FACTORS(6) ITERATE(25)
/EXTRACTION ULS
/CRITERIA ITERATE(25)
/ROTATION NOROTATE.

to see the correlation matrix SPSS is using to carry out the factor analysis. Then, in R, prepare the correlation matrix yourself by running

r <- cor(data, use = "pairwise.complete.obs")  # pairwise deletion, as in SPSS /MISSING PAIRWISE

Any discrepancy in the way missing values are handled should be apparent at this stage. Once you have checked that the correlation matrix is the same, you can feed it to the fa function and run your analysis again:

fa.results <- fa(r, nfactors=6, rotate="promax",
scores=TRUE, fm="pa", oblique.scores=FALSE, max.iter=25)

If you still get different results in SPSS and R, the problem is not related to how missing values are handled.

Next, you can compare the results of the factor analysis/extraction method itself.

FACTOR
/VARIABLES <variables>
/MISSING PAIRWISE
/ANALYSIS <variables>
/PRINT EXTRACTION
/FORMAT BLANK(.35)
/CRITERIA FACTORS(6) ITERATE(25)
/EXTRACTION ULS
/CRITERIA ITERATE(25)
/ROTATION NOROTATE.

and

fa.results <- fa(r, nfactors=6, rotate="none", 
scores=TRUE, fm="pa", oblique.scores=FALSE, max.iter=25)

Again, compare the factor matrices, communalities, and sums of squared loadings. Here you can expect some tiny differences, but certainly not of the magnitude you describe. All this should give you a clearer idea of what's going on.
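On the R side, the pieces to compare can be pulled out of the object returned by fa (a quick sketch; the field names are those documented in psych):

    fa.unrot <- fa(r, nfactors = 6, rotate = "none", fm = "pa", max.iter = 25)

    fa.unrot$loadings             # unrotated factor matrix (SPSS "Factor Matrix")
    fa.unrot$communality          # communalities (SPSS "Extraction" column)
    colSums(fa.unrot$loadings^2)  # sums of squared loadings per factor
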

Now, to answer your three questions directly:

  1. In my experience, it's possible to obtain very similar results, sometimes after spending some time figuring out the different terminologies and fiddling with the parameters. I have had several occasions to run factor analyses in both SPSS and R (typically working in R and then reproducing the analysis in SPSS to share it with colleagues), and I have always obtained essentially the same results. I would therefore not generally expect large differences, which leads me to suspect the problem is specific to your data set. I did quickly try the commands you provided on a data set I had lying around (Likert-scale data), and the differences were in fact bigger than I am used to, but not as big as those you describe. (I might update my answer if I get more time to play with this.)
  2. Most of the time, people interpret the sums of squared loadings after rotation as the “proportion of variance explained” by each factor, but this is not meaningful after an oblique rotation (which is why psych does not report it at all and SPSS reports only the eigenvalues in this case; there is even a little footnote about it in the SPSS output). The initial eigenvalues are computed before any factor extraction; they don't tell you anything about the proportion of variance explained by your factors and are not really “sums of squared loadings” either (they are commonly used to decide on the number of factors to retain). SPSS's “Extraction Sums of Squared Loadings” should, however, match the “SS loadings” provided by psych.
  3. This is a wild guess at this stage, but have you checked whether the factor extraction procedure converged in 25 iterations? If the rotation fails to converge, SPSS does not output any pattern/structure matrix, so you can't miss it; but if the extraction fails to converge, the last factor matrix is displayed nonetheless and SPSS blissfully continues with the rotation. You would, however, see a note: “a. Attempted to extract 6 factors. More than 25 iterations required. (Convergence=XXX). Extraction was terminated.” If the convergence value is small (something like .005, the default stopping criterion being “less than .0001”), it would still not account for the discrepancies you report, but if it is really large there is something pathological about your data.
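As a footnote to point 2: after an oblique rotation the factors are correlated, so the variance attributed to each factor involves the factor correlation matrix Phi, not just the pattern loadings, which is why the rotated SSLs cannot simply be added up. One way to see the quantity involved (a sketch, not SPSS's exact computation; `r` is the correlation matrix from above):

    f   <- fa(r, nfactors = 6, rotate = "promax", fm = "pa")
    L   <- unclass(f$loadings)  # pattern matrix
    Phi <- f$Phi                # factor correlation matrix

    # Per-factor variance contributions; when Phi is not the identity these
    # overlap, so their sum is not the total variance explained
    diag(Phi %*% t(L) %*% L)
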