I am interested in exploring how different characteristics of national pension systems are related to each other. I have used MCA for a dataset in which the rows are countries and the columns are different features of pension systems. However, I am not sure how to interpret the distances between points in the SPSS Joint Plot of Category Points. Using a symmetrical normalization, do the distances between points representing categories of different variables say something about how these categories are associated? Does a shorter distance mean a higher level of association?
Solved – Interpreting multiple correspondence analysis
correspondence-analysis, interpretation, spss
Related Solutions
First, there are different ways to construct so-called biplots in the case of correspondence analysis. In all cases, the basic idea is to find the best 2D approximation of the "distances" between row cells and column cells. In other words, we seek an ordering (we also speak of "ordination") of the relationships between the rows and columns of a contingency table.
Very briefly, CA decomposes the chi-square statistic associated with the two-way table into orthogonal factors that maximize the separation between row and column scores (i.e. the frequencies computed from the table of profiles). Here you can see some connection with PCA, but the measure of variance (or the metric) retained in CA is the $\chi^2$ distance, which depends only on the column profiles. (As it tends to give more importance to modalities with large marginal values, we can also re-weight the initial data, but that is another story.)
Here is a more detailed answer.
The implementation proposed in the corresp() function (in MASS) follows from a view of CA as an SVD decomposition of dummy-coded matrices representing the rows and columns (such that $R^tC=N$, with $N$ the total sample). This is in line with canonical correlation analysis.
In contrast, the French school of data analysis considers CA as a variant of PCA, where you seek the directions that maximize the "inertia" of the data cloud. This is done by diagonalizing the inertia matrix computed from the two-way table, centered and scaled by the marginal frequencies, and expressing the row and column profiles in this new coordinate system.
If you consider a table with $i=1,\dots,I$ rows and $j=1,\dots,J$ columns, each row is weighted by its corresponding marginal sum, which yields a series of conditional frequencies associated with each row: $f_{j|i}=n_{ij}/n_{i\cdot}$. The marginal column is called the mean profile (for rows). This gives us a vector of coordinates, also called a profile (by row). For the columns, we have $f_{i|j}=n_{ij}/n_{\cdot j}$. In both cases, we consider the $I$ row profiles (associated with their weight $f_{i\cdot}$) as individuals in the column space, and the $J$ column profiles (associated with their weight $f_{\cdot j}$) as individuals in the row space. The metric used to compute the proximity between any two individuals is the $\chi^2$ distance. For instance, between two rows $i$ and $i'$, we have
$$ d^2_{\chi^2}(i,i')=\sum_{j=1}^J\frac{n}{n_{\cdot j}}\left(\frac{n_{ij}}{n_{i\cdot}}-\frac{n_{i'j}}{n_{i'\cdot}} \right)^2 $$
You may also see the link with the $\chi^2$ statistic by noting that the latter is simply a weighted distance between observed and expected counts, where the expected counts (under $H_0$, independence of the two variables) are computed as $n_{i\cdot}\times n_{\cdot j}/n$ for each cell $(i,j)$. If the two variables were independent, the row profiles would all be equal, and identical to the corresponding marginal profile. In other words, when there is independence, your contingency table is entirely determined by its margins.
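To make the notation concrete, here is a minimal pure-Python sketch (the contingency table values are made up for illustration, not taken from the question) that computes the row profiles, the mean profile, and the $\chi^2$ distance between two rows:

```python
# Toy 2x3 contingency table n_ij (illustrative values only).
tab = [[10, 20, 30],
       [30, 30, 40]]

row_sums = [sum(r) for r in tab]          # n_i.
col_sums = [sum(c) for c in zip(*tab)]    # n_.j
n = sum(row_sums)                         # grand total

# Row profiles f_{j|i} = n_ij / n_i. and the mean row profile n_.j / n
row_profiles = [[nij / ni for nij in r] for r, ni in zip(tab, row_sums)]
mean_profile = [nj / n for nj in col_sums]

def chi2_dist2(i, i2):
    """Squared chi-square distance between the profiles of rows i and i2:
    sum_j (n / n_.j) * (n_ij / n_i. - n_i2j / n_i2.)^2"""
    return sum((n / col_sums[j]) *
               (row_profiles[i][j] - row_profiles[i2][j]) ** 2
               for j in range(len(col_sums)))

print(mean_profile)       # [0.25, 0.3125, 0.4375]
print(chi2_dist2(0, 1))   # about 0.0975
```

The weighting by $n/n_{\cdot j}$ is what distinguishes this from a plain Euclidean distance between profiles: columns with small margins count for more.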
If you perform a PCA on the row profiles (viewed as individuals), replacing the Euclidean distance with the $\chi^2$ distance, you get your CA. The first principal axis is the line closest to all points, and the corresponding eigenvalue is the inertia explained by this dimension. You can do the same with the column profiles. It can be shown that there is a symmetry between the two approaches, and more specifically that the principal components (PCs) for the column profiles are associated with the same eigenvalues as the PCs for the row profiles. What is shown on a biplot is the coordinates of the individuals in this new coordinate system, although the individuals are represented in separate factorial spaces. Provided each individual/modality is well represented in its factorial space (you can look at its $\cos^2$ with the 1st principal axis, which is a measure of correlation/association), you can even interpret the proximity between elements $i$ and $j$ of your contingency table (as can be done by looking at the residuals of your $\chi^2$ test of independence, e.g. chisq.test(tab)$observed - chisq.test(tab)$expected).
The total inertia of your CA (= the sum of eigenvalues) is the $\chi^2$ statistic divided by $n$ (which is Pearson's $\phi^2$).
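This identity is easy to check numerically. Here is a pure-Python sketch on a made-up table (the values are illustrative only):

```python
# Toy contingency table (illustrative values only).
tab = [[10, 20, 30],
       [30, 30, 40]]

row_sums = [sum(r) for r in tab]
col_sums = [sum(c) for c in zip(*tab)]
n = sum(row_sums)

# Expected counts under independence: n_i. * n_.j / n
expected = [[ri * cj / n for cj in col_sums] for ri in row_sums]

# Pearson chi-square statistic: sum of (observed - expected)^2 / expected
chi2 = sum((tab[i][j] - expected[i][j]) ** 2 / expected[i][j]
           for i in range(len(tab)) for j in range(len(col_sums)))

phi2 = chi2 / n   # Pearson's phi^2 = total inertia of the CA
print(round(chi2, 4), round(phi2, 4))   # 3.6571 0.0229
```

In R, the same statistic comes out of chisq.test(tab)$statistic (no continuity correction is applied for tables larger than 2x2), so you can compare phi2 here against the sum of the eigenvalues reported by any of the CA functions below.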
Actually, several packages may provide you with enhanced CAs compared to the function available in the MASS package: ade4, FactoMineR, anacor, and ca.
The last one is the one that was used for your particular illustration, and a paper was published in the Journal of Statistical Software that explains most of its functionalities: Correspondence Analysis in R, with Two- and Three-dimensional Graphics: The ca Package.
So, your example on eye/hair colors can be reproduced in many ways:
data(HairEyeColor)
tab <- apply(HairEyeColor, c(1, 2), sum)  # aggregate over gender
tab

## with MASS
library(MASS)
plot(corresp(tab, nf = 2))
corresp(tab, nf = 2)

## with ca
library(ca)
plot(ca(tab))
summary(ca(tab, nd = 2))

## with FactoMineR
library(FactoMineR)
CA(tab)
CA(tab, graph = FALSE)$eig  # == summary(ca(tab))$scree[, "values"]
CA(tab, graph = FALSE)$row$contrib

## with ade4
library(ade4)
scatter(dudi.coa(tab, scannf = FALSE, nf = 2))
In all cases, what we read in the resulting biplot is basically the following (I limit my interpretation to the 1st axis, which explains most of the inertia):
- the first axis highlights the clear opposition between light and dark hair color, and between blue and brown eyes;
- people with blond hair tend to also have blue eyes, and people with black hair tend to have brown eyes.
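This reading can be double-checked against the sign of the independence residuals. A pure-Python sketch using the aggregated hair/eye counts (summed over gender, as produced by the apply() call above):

```python
# Aggregated HairEyeColor table (rows: hair, cols: eye), summed over gender.
hair = ["Black", "Brown", "Red", "Blond"]
eye = ["Brown", "Blue", "Hazel", "Green"]
tab = [[68, 20, 15, 5],
       [119, 84, 54, 29],
       [26, 17, 14, 14],
       [7, 94, 10, 16]]

row_sums = [sum(r) for r in tab]
col_sums = [sum(c) for c in zip(*tab)]
n = sum(row_sums)

# Residuals observed - expected: a positive value means the combination
# is over-represented relative to independence.
resid = [[tab[i][j] - row_sums[i] * col_sums[j] / n
          for j in range(4)] for i in range(4)]

blond, black = hair.index("Blond"), hair.index("Black")
brown_e, blue_e = eye.index("Brown"), eye.index("Blue")
print(resid[blond][blue_e])   # positive: blond hair + blue eyes over-represented
print(resid[blond][brown_e])  # negative: blond hair + brown eyes under-represented
print(resid[black][brown_e])  # positive: black hair + brown eyes over-represented
```

The cells that stand out with large positive residuals are exactly the category pairs that end up close together on the first axis of the biplot.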
There are a lot of additional resources on data analysis from the bioinformatics lab in Lyon, France. These are mostly in French, but I think that should not be too much of a problem for you. The following two handouts should be interesting as a first start:
Finally, when you consider a full disjunctive (dummy) coding of $k$ variables, you get multiple correspondence analysis.
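As an illustration of what such a coding looks like, here is a small pure-Python sketch (the two categorical variables and their values are made up):

```python
# Two made-up categorical variables observed on 4 individuals.
data = [("blond", "blue"), ("black", "brown"),
        ("blond", "green"), ("red", "brown")]

variables = list(zip(*data))  # one tuple of values per variable

# Full disjunctive (indicator) coding: one 0/1 column per category of
# each variable; every row then sums to the number of variables.
columns = []   # (variable index, category) labels, in a fixed order
for k, values in enumerate(variables):
    for cat in sorted(set(values)):
        columns.append((k, cat))

Z = [[1 if data[i][k] == cat else 0 for (k, cat) in columns]
     for i in range(len(data))]

print(Z[0])   # first row: [0, 1, 0, 1, 0, 0]
```

Running a simple CA on this indicator matrix Z (or, equivalently, on the Burt table Z'Z) is what MCA amounts to.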
From your description of the task, I feel it can be solved by quantifying variables (turning ordinal-level into scale-level) in such a way that the predictions or associations are maximized. This is known as optimal scaling and is implemented in SPSS (and, I believe, in R as well). You might choose between 3 procedures, all adopting optimal scaling:
- Categorical PCA (CATPCA, or PRINCALS). Use this if you want PCA or Factor analysis. The procedure itself is PCA, not Factor analysis in the narrow sense of the word (implying communalities). If you need Factor analysis per se, you may input the quantified variables obtained in CATPCA into a standard Factor analysis procedure. Having identified the components or factors behind your independent "resource" variables and having obtained the factor scores, you could then check their effect on each dependent variable via ordinal regression (for example).
- Categorical Canonical Correlation analysis (OVERALS). Use this to draw out latent "traits" that are loaded simultaneously by both the independent and dependent sets of variables. You might want to read something about canonical correlations if you are not familiar with them.
- Categorical regression (CATREG). This is OVERALS for the case where one of the two sets of variables contains just one variable. Use it if you want to model the effects of your independent variables on each dependent variable separately. It is like ordinary linear regression, except that it is nonlinear because of optimal scaling.
Best Answer
The standard visualisation is the biplot. The interpretation depends on the details of the technique applied but will usually lean on some notion of inner product. Since I don't know what SPSS does when you ask for MCA, I hesitate to offer more concrete advice. Nevertheless, you'll surely find all you need to interpret them in the (free) book Biplots in Practice, specifically chapters 9-10.
However, if you're wondering how to interpret its output, you might profitably first revise your theory of correspondence analysis. Greenacre's CA in Practice is a good applied text. Ch. 9 covers biplots, and chs. 16-20 revise the multi-way extensions of simple correspondence analysis (they are short chapters). That should provide enough background to see what SPSS is offering you.
As @ttnphns points out, a two-way table implies simple rather than multiple correspondence analysis. Then things are indeed easier (but still see the references above).