The Kappa ($\kappa$) statistic is a quality index that compares the agreement observed between two raters on a nominal or ordinal scale with the agreement expected by chance alone (as if the raters were guessing at random). Extensions to the case of multiple raters exist (2, pp. 284–291). For ordinal data you can use the weighted $\kappa$, which is essentially the usual $\kappa$ with off-diagonal cells also contributing to the measure of agreement. Fleiss (3) provided guidelines for interpreting $\kappa$ values, but these are merely rules of thumb.
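To make the definition concrete, here is a minimal sketch of $\kappa$ computed from scratch (Python, with made-up ratings; the R packages mentioned further down compute this directly):

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa: (po - pe) / (1 - pe), where po is the observed
    agreement and pe the agreement expected by chance from the marginals."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    n = len(r1)
    # contingency table of the two raters' codings
    table = np.array([[np.sum((r1 == a) & (r2 == b)) for b in cats]
                      for a in cats])
    po = np.trace(table) / n
    pe = np.sum(table.sum(axis=0) * table.sum(axis=1)) / n**2
    return (po - pe) / (1 - pe)

# hypothetical ratings of 10 subjects by two raters (3 categories)
r1 = [1, 2, 2, 1, 3, 2, 1, 3, 3, 2]
r2 = [1, 2, 2, 2, 3, 2, 1, 3, 2, 2]
print(round(cohens_kappa(r1, r2), 3))  # -> 0.688
```

Here the raters agree on 8 of 10 subjects ($p_o = 0.8$) while chance alone would give $p_e = 0.36$, hence $\kappa = 0.44/0.64 \approx 0.69$.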
The $\kappa$ statistic is asymptotically equivalent to the ICC estimated from a two-way random-effects ANOVA, but the significance tests and standard errors from the usual ANOVA framework are no longer valid with binary data; it is better to bootstrap the confidence interval (CI). Fleiss (8) discussed the connection between weighted $\kappa$ and the intraclass correlation (ICC).
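A percentile bootstrap for the CI is simple to sketch: resample subjects (rating pairs) with replacement and recompute $\kappa$ each time (hypothetical data; a small kappa helper is included so the snippet stands alone):

```python
import numpy as np

def kappa(r1, r2):
    # chance-corrected agreement; guard against the degenerate pe = 1 case
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = np.mean(r1 == r2)
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in np.union1d(r1, r2))
    return (po - pe) / (1 - pe) if pe < 1 else 1.0

def bootstrap_kappa_ci(r1, r2, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample subjects with replacement."""
    rng = np.random.default_rng(seed)
    r1, r2 = np.asarray(r1), np.asarray(r2)
    n = len(r1)
    stats = [kappa(r1[idx], r2[idx])
             for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

r1 = [1, 2, 2, 1, 3, 2, 1, 3, 3, 2]
r2 = [1, 2, 2, 2, 3, 2, 1, 3, 2, 2]
lo, hi = bootstrap_kappa_ci(r1, r2)
```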
It should be noted that some psychometricians are not very fond of $\kappa$ because it is affected by the prevalence of the object of measurement, much as predictive values are affected by the prevalence of the disease under consideration, and this can lead to paradoxical results.
Inter-rater reliability for $k$ raters can be estimated with Kendall’s coefficient of concordance, $W$. When the number of items or units being rated is $n > 7$, $k(n − 1)W \sim \chi^2(n − 1)$ (2, pp. 269–270). This asymptotic approximation is valid for moderate values of $n$ and $k$ (6), but with fewer than 20 items $F$ or permutation tests are more suitable (7). There is a close relationship between Spearman’s $\rho$ and Kendall’s $W$: $W$ can be calculated directly from the mean of the pairwise Spearman correlations (for untied observations only).
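Both routes can be checked against each other in a few lines (Python, hypothetical rankings; for untied ranks the identity $W = ((k-1)\bar{\rho} + 1)/k$ holds exactly):

```python
import numpy as np
from itertools import combinations

def kendalls_w(ranks):
    """Kendall's W from a (k raters x n items) matrix of untied ranks."""
    k, n = ranks.shape
    R = ranks.sum(axis=0)                    # rank sum of each item
    S = np.sum((R - k * (n + 1) / 2) ** 2)   # deviations from the mean rank sum
    return 12 * S / (k**2 * (n**3 - n))

# hypothetical rankings of n = 6 items by k = 3 raters (no ties)
ranks = np.array([[1, 2, 3, 4, 5, 6],
                  [2, 1, 3, 5, 4, 6],
                  [1, 3, 2, 4, 6, 5]])
W = kendalls_w(ranks)

# same quantity from the mean pairwise Spearman correlation:
# since the rows are untied ranks, Pearson on ranks equals Spearman
k = ranks.shape[0]
rhos = [np.corrcoef(ranks[i], ranks[j])[0, 1]
        for i, j in combinations(range(k), 2)]
W_from_rho = ((k - 1) * np.mean(rhos) + 1) / k
print(round(W, 3), round(W_from_rho, 3))  # -> 0.873 0.873
```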
Polychoric correlations (for ordinal data) may also be used as a measure of inter-rater agreement. Indeed, they allow one to
- estimate what the correlation would be if ratings were made on a continuous scale,
- test marginal homogeneity between raters.
In fact, it can be shown that this is a special case of latent trait modeling, which allows one to relax the distributional assumptions (4).
For continuous (or assumed-continuous) measurements, the ICC, which quantifies the proportion of variance attributable to between-subject variation, is fine. Again, bootstrapped CIs are recommended. As @ars said, there are basically two versions -- agreement and consistency -- that are applicable to agreement studies (5); they differ mainly in how the sums of squares are computed, the “consistency” ICC generally being estimated without considering the Item×Rater interaction. The ANOVA framework is useful with specific block designs where one wants to minimize the number of ratings (BIBD) -- in fact, this was one of the original motivations of Fleiss's work. It is also the best way to go with multiple raters. The natural extension of this approach is called Generalizability Theory. A brief overview is given in Rater Models: An Introduction; otherwise the standard reference is Brennan's book, reviewed in Psychometrika 2006 71(3).
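As a sketch of the agreement/consistency distinction, here are the single-rater ICC(2,1) and ICC(3,1) of Shrout and Fleiss (5), built from the mean squares of a two-way ANOVA (Python, hypothetical scores):

```python
import numpy as np

def icc_two_way(X):
    """Single-rater ICCs from a two-way layout X (n subjects x k raters),
    using the mean squares of a subjects-by-raters ANOVA
    (Shrout & Fleiss: ICC(2,1) = agreement, ICC(3,1) = consistency)."""
    n, k = X.shape
    grand = X.mean()
    MSR = k * np.sum((X.mean(axis=1) - grand) ** 2) / (n - 1)  # subjects
    MSC = n * np.sum((X.mean(axis=0) - grand) ** 2) / (k - 1)  # raters
    resid = (X - X.mean(axis=1, keepdims=True)
               - X.mean(axis=0, keepdims=True) + grand)
    MSE = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    consistency = (MSR - MSE) / (MSR + (k - 1) * MSE)
    agreement = (MSR - MSE) / (MSR + (k - 1) * MSE + k * (MSC - MSE) / n)
    return agreement, consistency

# hypothetical scores: rater 2 is systematically 1 point above rater 1
X = np.array([[1, 2], [2, 3], [3, 4], [4, 5], [5, 6]], dtype=float)
agreement, consistency = icc_two_way(X)
print(round(agreement, 3), round(consistency, 3))  # -> 0.833 1.0
```

With rater 2 sitting a constant 1 point above rater 1, consistency is perfect (1.0) while agreement is penalized for the systematic rater offset (5/6 ≈ 0.833).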
As for general references, I recommend chapter 3 of Statistics in Psychiatry, by Graham Dunn (Hodder Arnold, 2000). For a more complete treatment of reliability studies, the best reference to date is Dunn, G (2004). Design and Analysis of Reliability Studies. Arnold. See the review in the International Journal of Epidemiology.
A good online introduction is available on John Uebersax's website, Intraclass Correlation and Related Methods; it includes a discussion of the pros and cons of the ICC approach, especially with respect to ordinal scales.
Relevant R packages for two-way assessment (ordinal or continuous measurements) are listed in the Psychometrics Task View; I generally use the psy, psych, or irr packages. There is also the concord package, but I have never used it. For dealing with more than two raters, the lme4 package is the way to go, since it allows one to easily incorporate random effects, but most reliability designs can be analysed with aov() because we only need to estimate variance components.
References
- J Cohen. Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin, 70, 213–220, 1968.
- S Siegel and N John Castellan Jr. Nonparametric Statistics for the Behavioral Sciences. McGraw-Hill, second edition, 1988.
- J L Fleiss. Statistical Methods for Rates and Proportions. New York: Wiley, second edition, 1981.
- J S Uebersax. The tetrachoric and polychoric correlation coefficients. Statistical Methods for Rater Agreement web site, 2006. Available at: http://john-uebersax.com/stat/tetra.htm. Accessed February 24, 2010.
- P E Shrout and J L Fleiss. Intraclass correlation: Uses in assessing rater reliability. Psychological Bulletin, 86, 420–428, 1979.
- M G Kendall and B Babington Smith. The problem of m rankings. Annals of Mathematical Statistics, 10, 275–287, 1939.
- P Legendre. Coefficient of concordance. In N J Salkind, editor, Encyclopedia of Research Design. SAGE Publications, 2010.
- J L Fleiss. The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educational and Psychological Measurement, 33, 613-619, 1973.
Here are a couple of ways to think about this.
1)
A) You could treat each full sequence of codings as an ordered set of events (e.g., ["head nod", "head shake", "head nod", "eyebrow raised"] and ["head nod", "head shake", "eyebrow raised"]), then align the sequences using an algorithm that makes sense to you ( http://en.wikipedia.org/wiki/Sequence_alignment ). You could then compute inter-coder reliability for the entire sequence.
B) Then, again using the aligned sequences, you could compare when each coder said an event happened, given that both observed the event.
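The alignment step in (A) can be sketched with Python's difflib, which aligns two sequences by longest matching blocks (hypothetical codings; the "agreement" ratio here is just the proportion of aligned events, not a chance-corrected index like kappa):

```python
from difflib import SequenceMatcher

# hypothetical event codings from two coders
coder1 = ["head nod", "head shake", "head nod", "eyebrow raised"]
coder2 = ["head nod", "head shake", "eyebrow raised"]

sm = SequenceMatcher(a=coder1, b=coder2)
for tag, i1, i2, j1, j2 in sm.get_opcodes():
    # tag is 'equal', 'delete', 'insert', or 'replace'
    print(tag, coder1[i1:i2], coder2[j1:j2])

# proportion of events that the aligned sequences share
matched = sum(block.size for block in sm.get_matching_blocks())
agreement = 2 * matched / (len(coder1) + len(coder2))
print(round(agreement, 3))  # -> 0.857
```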
2)
Alternatively, you could model this as a Hidden Markov Model and use something like the Baum-Welch algorithm to impute the probabilities that, given some actual event, each coder coded the data correctly: http://en.wikipedia.org/wiki/Baum-Welch_algorithm
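For intuition, here is a minimal, unscaled Baum-Welch sketch for a discrete HMM (numpy only, with a hypothetical observation sequence); for real data you would want a scaled or log-space implementation, or a library:

```python
import numpy as np

def baum_welch(obs, n_states, n_symbols, n_iter=50, seed=0):
    """Fit a discrete HMM by Baum-Welch (EM). Returns the transition
    matrix A, emission matrix B, and initial distribution pi.
    Unscaled forward/backward -- only suitable for short sequences."""
    rng = np.random.default_rng(seed)
    A = rng.random((n_states, n_states)); A /= A.sum(1, keepdims=True)
    B = rng.random((n_states, n_symbols)); B /= B.sum(1, keepdims=True)
    pi = np.full(n_states, 1 / n_states)
    obs = np.asarray(obs)
    T = len(obs)
    for _ in range(n_iter):
        # E-step: forward (alpha) and backward (beta) probabilities
        alpha = np.zeros((T, n_states))
        beta = np.zeros((T, n_states))
        alpha[0] = pi * B[:, obs[0]]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
        gamma = alpha * beta                     # single-state posteriors
        gamma /= gamma.sum(1, keepdims=True)
        xi = (alpha[:-1, :, None] * A[None]      # pairwise-state posteriors
              * (B[:, obs[1:]].T * beta[1:])[:, None, :])
        xi /= xi.sum((1, 2), keepdims=True)
        # M-step: re-estimate the parameters from expected counts
        pi = gamma[0]
        A = xi.sum(0) / gamma[:-1].sum(0)[:, None]
        B = np.stack([gamma[obs == s].sum(0) for s in range(n_symbols)], axis=1)
        B /= gamma.sum(0)[:, None]
    return A, B, pi

# hypothetical 0/1 coding stream
A, B, pi = baum_welch([0, 1, 0, 0, 1, 1, 0, 1, 0, 0], n_states=2, n_symbols=2)
```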
Best Answer
If you are looking at inter-rater reliability on the total scale scores (and you should be), then Kappa would not be appropriate. If you have two raters for the pre-test and two for the post-test, then a correlation would be informative. If you have more than two raters, computing the ICC (intraclass correlation) from the SPSS RELIABILITY procedure would be appropriate.