First of all, I second ttnphns's recommendation to look at the solution before rotation. Factor analysis as implemented in SPSS is a complex procedure with several steps; comparing the result of each of these steps should help you pinpoint the problem.
Specifically, you can run
FACTOR
/VARIABLES <variables>
/MISSING PAIRWISE
/ANALYSIS <variables>
/PRINT CORRELATION
/CRITERIA FACTORS(6) ITERATE(25)
/EXTRACTION ULS
/CRITERIA ITERATE(25)
/ROTATION NOROTATE.
to see the correlation matrix SPSS is using to carry out the factor analysis. Then, in R, prepare the correlation matrix yourself by running
r <- cor(data, use="pairwise.complete.obs")
Any discrepancy in the way missing values are handled should be apparent at this stage. Once you have checked that the correlation matrix is the same, you can feed it to the fa function and run your analysis again:
fa.results <- fa(r, nfactors=6, rotate="promax",
scores=TRUE, fm="pa", oblique.scores=FALSE, max.iter=25)
If you still get different results in SPSS and R, the problem is not related to missing values.
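To see why the missing-value treatment matters here, the following is a small R sketch on synthetic data (the numbers are made up for illustration) showing that pairwise deletion, which is what `/MISSING PAIRWISE` requests in SPSS, generally produces a different correlation matrix than listwise deletion:

```r
# Pairwise vs. listwise deletion generally yield different correlation
# matrices; any such difference propagates to the factor solution.
set.seed(1)
x <- matrix(rnorm(300), ncol = 3)
x[sample(length(x), 30)] <- NA                   # punch some holes in the data
r_pair <- cor(x, use = "pairwise.complete.obs")  # what /MISSING PAIRWISE does
r_list <- cor(x, use = "complete.obs")           # listwise deletion instead
max(abs(r_pair - r_list))                        # nonzero: the treatments disagree
```

Checking the largest absolute discrepancy between the two matrices (or between R's matrix and the one SPSS prints) is usually enough to localize the problem.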
Next, you can compare the results of the factor analysis/extraction method itself.
FACTOR
/VARIABLES <variables>
/MISSING PAIRWISE
/ANALYSIS <variables>
/PRINT EXTRACTION
/FORMAT BLANK(.35)
/CRITERIA FACTORS(6) ITERATE(25)
/EXTRACTION ULS
/CRITERIA ITERATE(25)
/ROTATION NOROTATE.
and
fa.results <- fa(r, nfactors=6, rotate="none",
scores=TRUE, fm="pa", oblique.scores=FALSE, max.iter=25)
Again, compare the factor matrices, communalities, and sums of squared loadings. Here you can expect some tiny differences, but certainly not of the magnitude you describe. All this should give you a clearer idea of what's going on.
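For the comparison itself, both quantities follow directly from the factor matrix, so they can be recomputed by hand from either program's output. A minimal R sketch with a made-up loadings matrix (the numbers are purely illustrative):

```r
# Toy unrotated loadings matrix (4 variables, 2 factors), made up for
# illustration; communalities are row sums of squared loadings and the
# "SS loadings" are column sums.
L <- rbind(c(.8, .1),
           c(.7, .2),
           c(.1, .6),
           c(.2, .7))
communalities <- rowSums(L^2)  # compare with the SPSS "Extraction" communalities
ss_loadings   <- colSums(L^2)  # compare with psych's "SS loadings" row
round(communalities, 3)
round(ss_loadings, 3)
```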
Now, to answer your three questions directly:
- In my experience, it's possible to obtain very similar results, sometimes after spending some time figuring out the different terminologies and fiddling with the parameters. I have had several occasions to run factor analyses in both SPSS and R (typically working in R and then reproducing the analysis in SPSS to share it with colleagues) and always obtained essentially the same results. I would therefore generally not expect large differences, which leads me to suspect the problem might be specific to your data set. I did however quickly try the commands you provided on a data set I had lying around (it's a Likert scale) and the differences were in fact bigger than I am used to but not as big as those you describe. (I might update my answer if I get more time to play with this.)
- Most of the time, people interpret the sum of squared loadings after rotation as the “proportion of variance explained” by each factor, but this is not meaningful following an oblique rotation (which is why psych does not report it at all and SPSS only reports the eigenvalues in this case – there is even a little footnote about it in the output). The initial eigenvalues are computed before any factor extraction; they don't tell you anything about the proportion of variance explained by your factors and are not really “sums of squared loadings” either (they are typically used to decide on the number of factors to retain). SPSS's “Extraction Sums of Squared Loadings” should, however, match the “SS loadings” reported by psych.
- This is a wild guess at this stage, but have you checked whether the factor extraction procedure converged in 25 iterations? If the rotation fails to converge, SPSS does not output any pattern/structure matrix and you can't miss it, but if the extraction fails to converge, the last factor matrix is displayed nonetheless and SPSS blissfully continues with the rotation. You would, however, see a note “a. Attempted to extract 6 factors. More than 25 iterations required. (Convergence=XXX). Extraction was terminated.” If the convergence value is small (something like .005, the default stopping criterion being “less than .0001”), it would still not account for the discrepancies you report, but if it is really large there is something pathological about your data.
A singular covariance matrix means that some variables in your data set are linear functions of one another. Most typically, the culprit is a full set of dummy variables coding a categorical factor. You put categorical data in your tags, but you did not describe how exactly it enters your EFA. Strictly speaking, categorical data violate the assumptions of EFA (multivariate normality), so you will probably need to modify your analysis somehow.
The error message, however, points to a somewhat poor implementation of EFA. There are EFA methods that can cope with a degenerate matrix, although it of course makes life harder for the methods that rely on inverses and determinants of the covariance matrix of the observed variables. A better implementation would crank through it with a warning.
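To illustrate the dummy-variable case, a full set of dummies is enough to make the covariance matrix singular, which a simple rank check exposes (a minimal R sketch on synthetic data):

```r
# A full set of dummy variables sums to 1 in every row, which creates an
# exact linear dependency and hence a singular covariance matrix.
set.seed(1)
g <- factor(rep(c("a", "b", "c"), 10))
dummies <- model.matrix(~ g - 1)    # all three dummies: each row sums to 1
x <- cbind(dummies, rnorm(30))      # plus one ordinary continuous variable
qr(cov(x))$rank < ncol(x)           # TRUE: the covariance matrix is singular
```

Dropping one dummy per categorical factor (the usual reference-category coding) removes the dependency.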
The Kaiser-Meyer-Olkin (KMO) measure tests sampling adequacy by estimating the proportion of variance in the items that may be common variance. Values between .80 and 1.00 indicate sampling adequacy (Cerny & Kaiser, 1977).
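For reference, the overall KMO statistic can be computed directly from a correlation matrix. A minimal R sketch under the usual definition in terms of squared correlations and squared partial (anti-image) correlations; the toy matrix is made up for illustration:

```r
# Overall KMO: sum of squared off-diagonal correlations divided by that
# sum plus the sum of squared off-diagonal partial (anti-image) correlations.
kmo <- function(r) {
  inv <- solve(r)
  # partial correlations from the inverse of the correlation matrix
  partial <- -inv / sqrt(outer(diag(inv), diag(inv)))
  off <- row(r) != col(r)
  sum(r[off]^2) / (sum(r[off]^2) + sum(partial[off]^2))
}
r_toy <- matrix(0.5, 3, 3); diag(r_toy) <- 1   # toy correlation matrix
kmo(r_toy)
```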
Bartlett’s test of sphericity examines whether a correlation matrix differs significantly from the identity matrix, whose diagonal elements are unities and whose off-diagonal elements are all zeros (Bartlett, 1950). A significant result indicates that the variables in the correlation matrix are suitable for factor analysis.
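The test statistic itself is simple to compute. A minimal R sketch of the standard formulation (the toy matrix and sample size are made up for illustration):

```r
# Bartlett's statistic: -(n - 1 - (2p + 5) / 6) * log(det(R)),
# chi-square distributed with p(p - 1)/2 degrees of freedom under H0,
# where n is the sample size and p the number of variables.
bartlett_sphericity <- function(r, n) {
  p <- ncol(r)
  chisq <- -(n - 1 - (2 * p + 5) / 6) * log(det(r))
  c(chisq = chisq, df = p * (p - 1) / 2,
    p.value = pchisq(chisq, p * (p - 1) / 2, lower.tail = FALSE))
}
r_toy <- matrix(0.5, 3, 3); diag(r_toy) <- 1
bartlett_sphericity(r_toy, n = 100)    # clearly significant
bartlett_sphericity(diag(3), n = 100)  # identity matrix: p-value of 1
```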
Before providing a concise summary of the aforementioned fit statistics, it is worth noting that there are different classifications of fit indices, but one popular classification distinguishes between absolute fit indices and comparative fit indices.
Classification of fit indices: Absolute and Comparative
The logic behind absolute fit indices is essentially to test how well the model specified by the researcher reproduces the observed data. Commonly used absolute fit statistics include the $\chi^2$ fit statistic, the RMSEA, and the SRMR.
In contrast, comparative fit indices follow a different logic: they assess how well the model specified by the researcher fits the observed sample data relative to a null model, i.e., a model that assumes the observed variables are uncorrelated (Miles & Shevlin, 2007). Popular comparative fit indices are the CFI and the TLI.
The $\chi^2$ fit statistic
The $\chi^2$ measures the discrepancy between the observed and the implied covariance matrices.
The $\chi^2$ fit statistic is very popular and frequently reported in both CFA and SEM studies.
However, it is notoriously sensitive to large sample sizes and to model complexity (i.e., models with many indicators and degrees of freedom). Current practice is therefore to report it mostly for historical reasons; it is rarely used to make decisions about the adequacy of model fit.
The RMSEA
The Root Mean Square Error of Approximation (RMSEA) provides information as to how well the model, with unknown but optimally chosen parameter estimates, would fit the population covariance matrix (Byrne, 1998).
It is a very commonly used fit statistic.
One of its key advantages is that a confidence interval can be computed around the RMSEA value.
Values below $.060$ indicate close fit (Hu & Bentler, 1999). Values up to $.080$ are commonly accepted as adequate.
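The point estimate can be recovered by hand from the model chi-square, its degrees of freedom, and the sample size. A minimal R sketch of one common formulation (some programs use $N$ rather than $N - 1$; the input values are illustrative):

```r
# RMSEA point estimate from chi-square, df, and sample size n;
# the max(..., 0) keeps the estimate at 0 when chi-square < df.
rmsea <- function(chisq, df, n) {
  sqrt(max(chisq - df, 0) / (df * (n - 1)))
}
rmsea(85, 40, 500)   # roughly .047: close fit
rmsea(30, 40, 500)   # chi-square below df: RMSEA of exactly 0
```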
The SRMR
The Standardized Root Mean Square Residual (SRMR) is the square root of the mean squared residual, i.e., of the standardized discrepancies between the sample covariance matrix and the covariance matrix implied by the hypothesized model.
As the SRMR is standardized, its values range between $0$ and $1$. Models with values below the $.05$ threshold are commonly considered to show good fit (Byrne, 1998), and values up to $.08$ are acceptable (Hu & Bentler, 1999).
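A minimal R sketch of the computation, assuming both matrices are already standardized (correlation metric); note that implementations differ on whether the diagonal is included in the average, and the toy matrices are made up for illustration:

```r
# SRMR: square root of the mean squared residual between the sample and
# model-implied matrices, averaged over the lower triangle incl. diagonal.
srmr <- function(s, sigma) {
  resid <- s - sigma
  sqrt(mean(resid[lower.tri(resid, diag = TRUE)]^2))
}
s_toy     <- diag(2)                       # toy sample correlation matrix
sigma_toy <- matrix(c(1, 0.1, 0.1, 1), 2)  # toy model-implied matrix
srmr(s_toy, sigma_toy)
```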
The CFI and TLI
Two comparative fit indices commonly reported are the Comparative Fit Index (CFI) and the Tucker Lewis Index (TLI). The indices are similar; however, note that the CFI is normed while the TLI is not. Therefore, the CFI’s values range between zero and one, whereas the TLI’s values may fall below zero or be above one (Hair et al., 2013).
For the CFI and TLI, values above $.95$ are indicative of good fit (Hu & Bentler, 1999). In practice, CFI and TLI values from $.90$ to $.95$ are considered acceptable. Recall that because the TLI is non-normed, its values can exceed $1.00$.
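Both indices can be computed from the chi-squares and degrees of freedom of the fitted and null models. A minimal R sketch of the standard formulas (the input values are illustrative), which also shows how the TLI can exceed 1:

```r
# CFI and TLI from the model (chisq, df) and null model (chisq0, df0).
cfi <- function(chisq, df, chisq0, df0) {
  1 - max(chisq - df, 0) / max(chisq0 - df0, chisq - df, 0)
}
tli <- function(chisq, df, chisq0, df0) {
  (chisq0 / df0 - chisq / df) / (chisq0 / df0 - 1)
}
cfi(45, 40, 1200, 55)   # close to 1: good comparative fit
tli(30, 40, 1200, 55)   # exceeds 1 because the model chi-square is below its df
```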
References
Aichholzer, J. (2014). Random intercept EFA of personality scales. Journal of Research in Personality, 53, 1-4.
Bartlett, M. S. (1950). Tests of significance in factor analysis. British Journal of Statistical Psychology, 3(2), 77-85.
Byrne, B.M. (1998). Structural Equation Modeling with LISREL, PRELIS and SIMPLIS: Basic Concepts, Applications and Programming. Mahwah, NJ: Lawrence Erlbaum Associates.
Cerny, B. A., & Kaiser, H. F. (1977). A study of a measure of sampling adequacy for factor-analytic correlation matrices. Multivariate Behavioral Research, 12(1), 43–47.
Hair, J. F., Black, W. C., Babin, B. J., Anderson, R. E., & Tatham, R. L. (2013). Multivariate data analysis. Englewood Cliffs, NJ: Prentice-Hall.
Hooper, D., Coughlan, J., & Mullen, M. R. (2008). Structural equation modeling: Guidelines for determining model fit. Electronic Journal of Business Research Methods, 6(1), 53-60.
Hoyle, R. H. (2012). Handbook of structural equation modeling. London: Guilford Press.
Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1–55.
Miles, J., & Shevlin, M. (2007). A time and a place for incremental fit indices. Personality and Individual Differences, 42(5), 869–874.