First, there are different ways to construct so-called biplots in the case of correspondence analysis. In all cases, the basic idea is to find a way to show the best 2D approximation of the "distances" between row cells and column cells. In other words, we seek a hierarchy (we also speak of "ordination") of the relationships between rows and columns of a contingency table.
Very briefly, CA decomposes the chi-square statistic associated with the two-way table into orthogonal factors that maximize the separation between row and column scores (i.e., the frequencies computed from the table of profiles). Here you can see some connection with PCA, but the measure of variance (or the metric) retained in CA is the $\chi^2$, which only depends on column profiles (as it tends to give more importance to modalities that have large marginal values, we can also re-weight the initial data, but that is another story).
Here is a more detailed answer.
The implementation proposed in the corresp() function (in MASS) follows from a view of CA as an SVD of dummy-coded matrices representing the rows and columns (such that $R^tC=N$, with $N$ the total sample). This is in line with canonical correlation analysis.
In contrast, the French school of data analysis considers CA as a variant of PCA, where you seek the directions that maximize the "inertia" of the data cloud. This is done by diagonalizing the inertia matrix computed from the centered and scaled (by marginal frequencies) two-way table, and expressing the row and column profiles in this new coordinate system.
If you consider a table with $i=1,\dots,I$ rows and $j=1,\dots,J$ columns, each row is weighted by its corresponding marginal sum, which yields a series of conditional frequencies associated with each row: $f_{j|i}=n_{ij}/n_{i\cdot}$. The marginal column is called the mean profile (for rows). This gives us a vector of coordinates, also called a profile (by row). For the columns, we have $f_{i|j}=n_{ij}/n_{\cdot j}$. In both cases, we will consider the $I$ row profiles (associated with their weight $f_{i\cdot}$) as individuals in the column space, and the $J$ column profiles (associated with their weight $f_{\cdot j}$) as individuals in the row space. The metric used to compute the proximity between any two individuals is the $\chi^2$ distance. For instance, between two rows $i$ and $i'$, we have
$$
d^2_{\chi^2}(i,i')=\sum_{j=1}^J\frac{n}{n_{\cdot j}}\left(\frac{n_{ij}}{n_{i\cdot}}-\frac{n_{i'j}}{n_{i'\cdot}} \right)^2
$$
You may also see the link with the $\chi^2$ statistic by noting that it is simply the distance between observed and expected counts, where the expected counts (under $H_0$, independence of the two variables) are computed as $n_{i\cdot}\times n_{\cdot j}/n$ for each cell $(i,j)$. If the two variables were independent, the row profiles would all be equal and identical to the corresponding marginal profile. In other words, under independence your contingency table is entirely determined by its margins.
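To make this concrete, here is a small R sketch (the helper chisq.dist and the object names are mine; it uses the HairEyeColor table built in the example further below) that computes the row profiles, the $\chi^2$ distance between two rows, and the expected counts under independence:
tab <- apply(HairEyeColor, c(1, 2), sum)      # hair/eye table, aggregated over gender
n <- sum(tab)
row.profiles <- prop.table(tab, 1)            # f_{j|i} = n_ij / n_i.
mean.row.profile <- colSums(tab) / n          # marginal (mean) row profile
chisq.dist <- function(tab, i, ip) {          # chi-square distance between rows i and i'
  fi  <- tab[i, ]  / sum(tab[i, ])
  fip <- tab[ip, ] / sum(tab[ip, ])
  sum(sum(tab) / colSums(tab) * (fi - fip)^2)
}
chisq.dist(tab, "Black", "Blond")
expected <- outer(rowSums(tab), colSums(tab)) / n   # n_i. * n_.j / n
all.equal(expected, chisq.test(tab)$expected, check.attributes = FALSE)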
If you carry out a PCA on the row profiles (viewed as individuals), replacing the Euclidean distance with the $\chi^2$ distance, you get your CA. The first principal axis is the line that is closest to all points, and the corresponding eigenvalue is the inertia explained by this dimension. You can do the same with the column profiles. It can be shown that there is a symmetry between the two approaches, and more specifically that the principal components (PCs) for the column profiles are associated with the same eigenvalues as the PCs for the row profiles. What is shown on a biplot is the coordinates of the individuals in this new coordinate system, although the individuals are represented in a separate factorial space. Provided each individual/modality is well represented in its factorial space (you can look at the $\cos^2$ of the modality with the 1st principal axis, which is a measure of the correlation/association), you can even interpret the proximity between elements $i$ and $j$ of your contingency table (as can be done by looking at the residuals of your $\chi^2$ test of independence, e.g. chisq.test(tab)$observed - chisq.test(tab)$expected).
The total inertia of your CA (= the sum of eigenvalues) is the $\chi^2$ statistic divided by $n$ (which is Pearson's $\phi^2$).
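A quick numerical check of this equality, assuming tab is the hair/eye table built in the snippet above and using the ca package discussed below:
library(ca)
sum(ca(tab)$sv^2)                      # total inertia = sum of eigenvalues
chisq.test(tab)$statistic / sum(tab)   # chi-square statistic / n = Pearson's phi^2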
Actually, there are several packages that may provide you with enhanced CAs compared to the function available in the MASS package: ade4, FactoMineR, anacor, and ca. The last one is the one that was used for your particular illustration, and a paper was published in the Journal of Statistical Software that explains most of its functionalities: Correspondence Analysis in R, with Two- and Three-dimensional Graphics: The ca Package.
So, your example on eye/hair colors can be reproduced in many ways:
data(HairEyeColor)
tab <- apply(HairEyeColor, c(1, 2), sum) # aggregate on gender
tab
# CA with MASS::corresp
library(MASS)
plot(corresp(tab, nf=2))
corresp(tab, nf=2)
# CA with the ca package
library(ca)
plot(ca(tab))
summary(ca(tab, nd=2))
# CA with FactoMineR
library(FactoMineR)
CA(tab)
CA(tab, graph=FALSE)$eig # == summary(ca(tab))$scree[,"values"]
CA(tab, graph=FALSE)$row$contrib
# CA with ade4
library(ade4)
scatter(dudi.coa(tab, scannf=FALSE, nf=2))
In all cases, what we read in the resulting biplot is basically (I limit my interpretation to the 1st axis, which explains most of the inertia):
- the first axis highlights the clear opposition between light and dark hair color, and between blue and brown eyes;
- people with blond hair tend to also have blue eyes, and people with black hair tend to have brown eyes.
There are a lot of additional resources on data analysis at the bioinformatics lab in Lyon, France. Most of the material is in French, but I think it would not be too much of a problem for you; their handouts make a good starting point.
Finally, when you consider a full disjunctive (dummy) coding of $k$ variables, you get multiple correspondence analysis.
I'm an ecologist, so I apologise in advance if this sounds a bit strange :-)
I like to think of these plots in terms of weighted averages. The region points are at the weighted averages of the smoking status classes and vice versa.
The problem with the above figure is the axis scaling and the fact that you can't display all the relationships (chi-square distances between regions and chi-square distances between smoking statuses) in one figure. By the looks of it, the figure uses what is known as symmetric scaling, which has been shown to be a good compromise, preserving as much of the information in the two sets of scores as possible.
I'm not familiar with the ca package, but I am with the vegan package and its cca() function:
require(vegan)
df <- data.frame(df)    # coerce the contingency table to a data frame
ord <- cca(df)          # cca() without constraints performs a simple CA
plot(ord, scaling = 3)  # scaling = 3: symmetric scaling of row and column scores
The last plot is a bit easier to read than the one you show but AFAICT they are the same (or at least similarly scaled).
So I would say that occasional smokers are lower in number than expected in QC, BC and AB, and most associated with ON, but that in all regions, occasional smokers are low in number - they differ markedly from the expected number.
However, there is a single dominant "gradient" or axis of variation in these data and as the second axis represents so little variation, I would likely not interpret this component at all.
SVD
Singular-value decomposition is at the root of the three kindred techniques. Let $\bf X$ be an $r \times c$ table of real values. The SVD is $\bf X = U_{r\times r}S_{r\times c}V_{c\times c}'$. We may use just the first $m$ $[m \le\min(r,c)]$ latent vectors and roots to obtain $\bf X_{(m)}$ as the best $m$-rank approximation of $\bf X$: $\bf X_{(m)} = U_{r\times m}S_{m\times m}V_{c\times m}'$. Further, we'll denote $\bf U=U_{r\times m}$, $\bf V=V_{c\times m}$, $\bf S=S_{m\times m}$.
Singular values $\bf S$ and their squares, the eigenvalues, represent scale, also called inertia, of the data. Left eigenvectors $\bf U$ are the coordinates of the rows of the data onto the $m$ principal axes; while right eigenvectors $\bf V$ are the coordinates of the columns of the data onto those same latent axes. The entire scale (inertia) is stored in $\bf S$ and so the coordinates $\bf U$ and $\bf V$ are unit-normalized (column SS=1).
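As a minimal R illustration (the object names are mine; any real-valued table would do), the rank-$m$ reconstruction reads:
X <- matrix(rnorm(6 * 4), nrow = 6)     # an arbitrary r x c table of real values
m <- 2
sv <- svd(X)
U <- sv$u[, 1:m]                        # unit-normalized row coordinates
V <- sv$v[, 1:m]                        # unit-normalized column coordinates
S <- diag(sv$d[1:m])                    # singular values carry the inertia
X.m <- U %*% S %*% t(V)                 # best rank-m approximation of X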
Principal Component Analysis by SVD
In PCA, it is agreed to consider rows of $\bf X$ as random observations (which can come or go), but to consider columns of $\bf X$ as a fixed number of dimensions or variables. Hence it is appropriate and convenient to remove the effect of the number of rows (and only rows) on the results, particularly on the eigenvalues, by svd-decomposing $\mathbf Z=\mathbf X/\sqrt{r}$ instead of $\bf X$. Note that this corresponds to the eigen-decomposition of $\mathbf {X'X}/r$, $r$ being the sample size n. (Often, mostly with covariances - to make them unbiased - we prefer to divide by $r-1$, but that is a nuance.) The multiplication of $\bf X$ by a constant affects only $\bf S$; $\bf U$ and $\bf V$ remain the unit-normalized coordinates of rows and of columns.
From here and everywhere below we redefine $\bf S$, $\bf U$ and $\bf V$ as given by svd of $\bf Z$, not of $\bf X$; $\bf Z$ being a normalized version of $\bf X$, and the normalization varies between types of analysis.
By computing $\mathbf U_*=\mathbf U\sqrt{r}$ we bring the mean square in the columns of $\bf U$ to 1. Given that rows are random cases to us, this is logical. We have thus obtained what is called in PCA standard or standardized principal component scores of observations, $\bf U_*$. We do not do the same thing with $\bf V$ because variables are fixed entities.
We can then confer rows with all the inertia, to obtain unstandardized row coordinates, also called in PCA raw principal component scores of observations: $\bf U_*S$. We'll call this formula the "direct way". The same result is returned by $\bf XV$; we'll label it the "indirect way".
Analogously, we can confer columns with all the inertia, to obtain unstandardized column coordinates, also called in PCA the component-variable loadings: $\bf VS'$ [the transpose may be ignored since $\bf S$ is square] - the "direct way". The same result is returned by $\bf Z'U$ - the "indirect way". (The above standardized principal component scores can also be computed from the loadings as $\bf X(AS^{-2})$, where $\bf A=VS$ are the loadings.)
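A short sketch (toy data, my own names) checking that the direct and indirect ways above give identical results:
X <- matrix(rnorm(6 * 4), nrow = 6)
r <- nrow(X)
Z <- X / sqrt(r)                        # PCA normalization (rows only)
sv <- svd(Z)
U <- sv$u; V <- sv$v; S <- diag(sv$d)
U.std <- U * sqrt(r)                    # standardized component scores
scores <- U.std %*% S                   # raw scores, direct way
all.equal(scores, X %*% V)              # indirect way: X V
loadings <- V %*% S                     # variable loadings, direct way
all.equal(loadings, t(Z) %*% U)         # indirect way: Z' U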
Biplot
Consider the biplot in the sense of a dimensionality reduction analysis in its own right, not simply as "a dual scatterplot". This analysis is very similar to PCA. Unlike in PCA, both rows and columns are treated symmetrically as random observations, which means that $\bf X$ is seen as a random two-way table of varying dimensionality. Then, naturally, normalize it by both $r$ and $c$ before the svd: $\mathbf Z=\mathbf X/\sqrt{rc}$.
After the svd, compute standard row coordinates as we did in PCA: $\mathbf U_*=\mathbf U\sqrt{r}$. Do the same thing (unlike in PCA) with the column vectors, to obtain standard column coordinates: $\mathbf V_*=\mathbf V\sqrt{c}$. Standard coordinates, both of rows and of columns, have mean square 1.
We may confer row and/or column coordinates with the inertia of the eigenvalues, as we do in PCA. Unstandardized row coordinates: $\bf U_*S$ (direct way). Unstandardized column coordinates: $\bf V_*S'$ (direct way). What about the indirect way? You can easily deduce by substitution that the indirect formula for the unstandardized row coordinates is $\mathbf {XV_*}/c$, and for the unstandardized column coordinates it is $\mathbf {X'U_*}/r$.
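And the corresponding sketch for the biplot normalization (again toy data and my own names; c. is used instead of c to avoid masking R's c()):
X <- matrix(rnorm(6 * 4), nrow = 6)
r <- nrow(X); c. <- ncol(X)
Z <- X / sqrt(r * c.)                        # biplot normalization (rows and columns)
sv <- svd(Z)
S <- diag(sv$d)
U.std <- sv$u * sqrt(r)                      # standard row coordinates
V.std <- sv$v * sqrt(c.)                     # standard column coordinates
all.equal(U.std %*% S, X %*% V.std / c.)     # unstandardized rows: direct = indirect
all.equal(V.std %*% S, t(X) %*% U.std / r)   # unstandardized columns: direct = indirect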
PCA as a particular case of Biplot. From the above descriptions you probably learned that PCA and biplot differ only in how they normalize $\bf X$ into $\bf Z$ which is then decomposed. Biplot normalizes by both the number of rows and the number of columns; PCA normalizes only by the number of rows. Consequently, there is a little difference between the two in the post-svd computations. If in doing biplot you set $c=1$ in its formulas you will get exactly PCA results. Thus, biplot can be seen as a generic method and PCA as a particular case of biplot.
[Column centering. Some user may say: Stop, but doesn't PCA also require, first of all, the centering of the data columns (variables) in order for it to explain variance, while biplot may not do the centering? My answer: only PCA-in-the-narrow-sense does the centering and explains variance; I am discussing linear PCA-in-the-general-sense, a PCA which explains some sort of sum of squared deviations from the chosen origin; you might choose it to be the data mean, the native 0, or whatever you like. Thus, the "centering" operation is not what distinguishes PCA from biplot.]
Passive rows and columns
In biplot or PCA, you can set some rows and/or columns to be passive, or supplementary. Passive row or column does not influence the SVD and therefore does not influence the inertia or the coordinates of other rows/columns, but receives its coordinates in the space of principal axes produced by the active (not passive) rows/columns.
To set some points (rows/columns) to be passive, (1) define $r$ and $c$ to be the number of active rows and columns only; (2) set the passive rows and columns to zero in $\bf Z$ before the svd; (3) use the "indirect" ways to compute the coordinates of the passive rows/columns, since their eigenvector values will be zero.
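A sketch of this recipe, with a hypothetical last row kept passive (object names are mine):
X <- matrix(rnorm(7 * 4), nrow = 7)
passive <- 7                              # suppose the last row is passive
r <- nrow(X) - 1; c. <- ncol(X)           # (1) r and c count active rows/columns only
Z <- X / sqrt(r * c.)
Z[passive, ] <- 0                         # (2) zero out the passive row before the svd
sv <- svd(Z)
V.std <- sv$v * sqrt(c.)                  # standard column coordinates (from active rows)
row.princ <- X %*% V.std / c.             # (3) indirect way places every row,
row.princ[passive, ]                      #     including the passive one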
In PCA, when you compute component scores for new incoming cases with the help of loadings obtained on old observations (using the score coefficient matrix), you are actually doing the same thing as taking these new cases into the PCA and keeping them passive. Similarly, computing correlations/covariances of some external variables with the component scores produced by a PCA is equivalent to taking those variables into that PCA and keeping them passive.
Arbitrary spreading of inertia
The column mean squares (MS) of standard coordinates are 1. The column mean squares (MS) of unstandardized coordinates are equal to the inertia of the respective principal axes: all the inertia of eigenvalues was donated to eigenvectors to produce the unstandardized coordinates.
In biplot: row standard coordinates $\bf U_*$ have MS=1 for each principal axis. Row unstandardized coordinates, also called row principal coordinates $\mathbf {U_*S} = \mathbf {XV_*}/c$ have MS = corresponding eigenvalue of $\bf Z$. The same is true for column standard and unstandardized (principal) coordinates.
Generally, it is not required that one endows coordinates with inertia either in full or in none. Arbitrary spreading is allowed, if needed for some reason. Let $p_1$ be the proportion of inertia which is to go to rows. Then the general formula of row coordinates is: $\bf U_*S^{p1}$ (direct way) = $\mathbf {XV_*S^{p1-1}}/c$ (indirect way). If $p_1=0$ we get standard row coordinates, whereas with $p_1=1$ we get principal row coordinates.
Likewise $p_2$ be the proportion of inertia which is to go to columns. Then the general formula of column coordinates is: $\bf V_*S^{p2}$ (direct way) = $\mathbf {X'U_*S^{p2-1}}/r$ (indirect way). If $p_2=0$ we get standard column coordinates, whereas with $p_2=1$ we get principal column coordinates.
The general indirect formulas are universal in that they also allow one to compute coordinates (standard, principal or in-between) for the passive points, if there are any.
If $p_1+p_2=1$ they say the inertia is distributed between row and column points. The $p_1=1,p_2=0$, i.e. row-principal-column-standard, biplots are sometimes called "form biplots" or "row-metric preservation" biplots. The $p_1=0,p_2=1$, i.e. row-standard-column-principal, biplots are often called within PCA literature "covariance biplots" or "column-metric preservation" biplots; they display variable loadings (which are juxtaposed to covariances) plus standardized component scores, when applied within PCA.
In correspondence analysis, $p_1=p_2=1/2$ is often used and is called "symmetric" or "canonical" normalization by inertia - it allows one (albeit at some expense of Euclidean geometric strictness) to compare the proximity between row and column points, as we can do on a multidimensional unfolding map.
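In code, the general formula is a one-liner once the standard coordinates are available (toy data, my own names; p1 = p2 = 1/2 reproduces the symmetric normalization just mentioned):
X <- matrix(rnorm(6 * 4), nrow = 6)
r <- nrow(X); c. <- ncol(X)
sv <- svd(X / sqrt(r * c.))
d <- sv$d
U.std <- sv$u * sqrt(r)                          # standard row coordinates
V.std <- sv$v * sqrt(c.)                         # standard column coordinates
spread <- function(p1, p2)
  list(rows = U.std %*% diag(d^p1, length(d)),   # U* S^p1
       cols = V.std %*% diag(d^p2, length(d)))   # V* S^p2
sym <- spread(1/2, 1/2)                          # "symmetric" (canonical) normalization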
Correspondence Analysis (Euclidean model)
Two-way (= simple) correspondence analysis (CA) is a biplot used to analyze a two-way contingency table, that is, a non-negative table whose entries bear the meaning of some sort of affinity between a row and a column. When the table contains frequencies, the chi-square model correspondence analysis is used. When the entries are, say, means or other scores, the simpler Euclidean model CA is used.
Euclidean model CA is just the biplot described above, only that the table $\bf X$ is additionally preprocessed before it enters the biplot operations. In particular, the values are normalized not only by $r$ and $c$ but also by the total sum $N$.
The preprocessing consists of centering, then normalizing by the mean mass. The centering can be done in various ways, most often: (1) centering of columns; (2) centering of rows; (3) two-way centering, which is the same operation as the computation of frequency residuals; (4) centering of columns after equalizing the column sums; (5) centering of rows after equalizing the row sums. Normalizing by the mean mass means dividing by the mean cell value of the initial table. At the preprocessing step, passive rows/columns, if they exist, are standardized passively: they are centered/normalized by the values computed from the active rows/columns.
Then the usual biplot is done on the preprocessed $\bf X$, starting from $\mathbf Z=\mathbf X/\sqrt{rc}$.
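A minimal sketch of the Euclidean model preprocessing, using column centering (option 1) on a hypothetical non-negative table of scores (names are mine):
X <- matrix(runif(6 * 4, 1, 10), nrow = 6)    # hypothetical table of scores
Xp <- sweep(X, 2, colMeans(X))                # (1) center the columns
Xp <- Xp / mean(X)                            # normalize by the mean mass (mean cell of the initial table)
r <- nrow(Xp); c. <- ncol(Xp)
sv <- svd(Xp / sqrt(r * c.))                  # then the usual biplot steps follow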
Weighted Biplot
Imagine that the activity or importance of a row or a column can be any number between 0 and 1, not only 0 (passive) or 1 (active) as in the classic biplot discussed so far. We could weight the input data by these row and column weights and perform a weighted biplot. With a weighted biplot, the greater the weight, the more influential that row or column is with respect to all the results - the inertia and the coordinates of all the points on the principal axes.
The user supplies row weights and column weights. Both sets are first normalized, separately, to sum to 1. Then the normalization step is $\mathbf{Z}_{ij} = \mathbf{X}_{ij}\sqrt{w_i w_j}$, with $w_i$ and $w_j$ being the weights for row $i$ and column $j$. A weight of exactly zero designates the row or the column as passive.
At that point we may discover that classic biplot is simply this weighted biplot with equal weights $1/r$ for all active rows and equal weights $1/c$ for all active columns; $r$ and $c$ the numbers of active rows and active columns.
Perform svd of $\bf Z$. All operations are the same as in classic biplot, the only difference being that $w_i$ is in place of $1/r$ and $w_j$ is in place of $1/c$. Standard row coordinates: $\mathbf {U_{*i}=U_i}/\sqrt{w_i}$ and standard column coordinates: $\mathbf {V_{*j}=V_j}/\sqrt{w_j}$. (These are for rows/columns with nonzero weight. Leave values as 0 for those with zero weight and use the indirect formulas below to obtain standard or whatever coordinates for them.)
Give inertia to coordinates in the proportion you want (with $p_1=1$ and $p_2=1$ the coordinates will be fully unstandardized, or principal; with $p_1=0$ and $p_2=0$ they will stay standard). Rows: $\bf U_*S^{p1}$ (direct way) = $\bf X[Wj]V_*S^{p1-1}$ (indirect way). Columns: $\bf V_*S^{p2}$ (direct way) = $\bf ([Wi]X)'U_*S^{p2-1}$ (indirect way). Matrices in brackets here are the diagonal matrices of the column and the row weights, respectively. For passive points (that is, with zero weights) only the indirect way of computation is suited. For active (positive weights) points you may go either way.
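A sketch of the weighted computations (my own names; with all weights equal it reduces to the classic biplot above):
X  <- matrix(runif(6 * 4, 1, 10), nrow = 6)
wi <- rep(1, 6); wi <- wi / sum(wi)               # row weights, normalized to sum to 1
wj <- rep(1, 4); wj <- wj / sum(wj)               # column weights, normalized to sum to 1
Z  <- X * sqrt(outer(wi, wj))                     # Z_ij = X_ij * sqrt(w_i * w_j)
sv <- svd(Z)
d  <- sv$d
U.std <- sv$u / sqrt(wi)                          # standard row coordinates
V.std <- sv$v / sqrt(wj)                          # standard column coordinates
rows.princ <- U.std %*% diag(d)                   # principal row coordinates, direct way
all.equal(rows.princ, X %*% diag(wj) %*% V.std)   # indirect way: X [Wj] V*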
PCA as a particular case of Biplot revisited. When considering the unweighted biplot earlier I mentioned that PCA and biplot are equivalent, the only difference being that biplot sees the columns (variables) of the data as random cases, symmetrically to the observations (rows). Having now extended the biplot to the more general weighted biplot, we may claim it once again, observing that the only difference is that the (weighted) biplot normalizes the sum of the column weights of the input data to 1, while (weighted) PCA normalizes it to the number of (active) columns. So here is weighted PCA introduced. Its results are proportionally identical to those of the weighted biplot: if $c$ is the number of active columns, the two sets of results differ only by factors involving $c$ (for example, the biplot eigenvalues equal the PCA eigenvalues divided by $c$), for weighted as well as classic versions of the two analyses.
Correspondence Analysis (Chi-square model)
This is technically a weighted biplot where the weights are computed from the table itself rather than supplied by the user. It is used mostly to analyze frequency cross-tables. This biplot will approximate, by Euclidean distances on the plot, the chi-square distances in the table. The chi-square distance is mathematically a Euclidean distance inversely weighted by the marginal totals. I will not go further into the details of the geometry of chi-square model CA.
The preprocessing of the frequency table $\bf X$ is as follows: divide each frequency by the expected frequency, then subtract 1. This is the same as first obtaining the frequency residual and then dividing by the expected frequency. Set row weights to $w_i=R_i/N$ and column weights to $w_j=C_j/N$, where $R_i$ is the marginal sum of row $i$ (over active columns only), $C_j$ is the marginal sum of column $j$ (over active rows only), and $N$ is the total active sum of the table (the three numbers come from the initial table).
Then do weighted biplot: (1) Normalize $\bf X$ into $\bf Z$. (2) The weights are never zero (zero $R_i$ and $C_j$ are not allowed in CA); however you can force rows/columns to become passive by zeroing them in $\bf Z$, so their weights are ineffective at svd. (3) Do svd. (4) Compute standard and inertia-vested coordinates as in weighted biplot.
In Chi-square model CA as well as in Euclidean model CA using two-way centering one last eigenvalue is always 0, so the maximal possible number of principal dimensions is $\min(r-1,c-1)$.
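As a check, here is a minimal sketch that runs the chi-square model CA as a weighted biplot on the hair/eye table from the first answer, and compares the principal inertias with those returned by the ca package:
tab <- apply(HairEyeColor, c(1, 2), sum)
N  <- sum(tab)
E  <- outer(rowSums(tab), colSums(tab)) / N   # expected frequencies
X  <- tab / E - 1                             # preprocessing: (observed - expected) / expected
wi <- rowSums(tab) / N                        # row weights (masses)
wj <- colSums(tab) / N                        # column weights (masses)
Z  <- X * sqrt(outer(wi, wj))
d  <- svd(Z)$d
d[d > 1e-12]^2                                # principal inertias (the trailing ~0 value dropped)
library(ca)
ca(tab)$sv^2                                  # the same values from the ca package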
See also a nice overview of chi-square model CA in this answer.
Illustrations
Here is some data table.
Several dual scatterplots (in 2 first principal dimensions) built on analyses of these values follow. Column points are connected with the origin by spikes for visual emphasis. There were no passive rows or columns in these analyses.
The first biplot shows the SVD results of the data table analyzed "as is"; the coordinates are the row and the column eigenvectors.
Below is one of the possible biplots coming from PCA. The PCA was done on the data "as is", without centering the columns; however, as is customary in PCA, normalization by the number of rows (the number of cases) was done initially. This specific biplot displays principal row coordinates (i.e. raw component scores) and principal column coordinates (i.e. variable loadings).
Next is biplot sensu stricto: The table was initially normalized both by the number of rows and the number of columns. Principal normalization (inertia spreading) was used for both row and column coordinates - as with PCA above. Note the similarity with the PCA biplot: the only difference is due to the difference in the initial normalization.
Chi-square model correspondence analysis biplot. The data table was preprocessed in the special manner described above: two-way centering and a normalization using the marginal totals. It is a weighted biplot. Inertia was spread over the row and the column coordinates symmetrically - both are halfway between "principal" and "standard" coordinates.
The coordinates displayed on all these scatterplots: