Solved – Interrater reliability in SPSS

agreement-statistics, reliability, spss

I am trying to calculate interrater reliability in SPSS for both the pre-test and post-test of the same measure, which is administered as part of a prison intake program. The measure has 20 items plus a total score, and I am only looking at agreement on the total score.

I ran crosstabs and calculated kappa for the pretest, along with correlations between the raters' totals. Is that all I need to do, or is there more that would be helpful to report for publication?
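
(For concreteness, the syntax was along these lines, with placeholder variable names rater1_pre and rater2_pre standing in for the two raters' pretest totals:)

    * Cross-tabulate the two raters' pretest totals and request kappa.
    CROSSTABS
      /TABLES=rater1_pre BY rater2_pre
      /STATISTICS=KAPPA.

    * Pearson correlation between the two raters' pretest totals.
    CORRELATIONS
      /VARIABLES=rater1_pre rater2_pre
      /PRINT=TWOTAIL NOSIG.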

Also, I have one rater who I suspect is a problem, and I want to use this information for training raters. How can I identify how much error is associated with an individual rater?

If you could frame your thoughts in terms of SPSS, that would be helpful. I am learning R, but at the moment I am completely R illiterate.

Thanks for any information.

Best Answer

If you are looking at inter-rater reliability on the total scale scores (and you should be), then kappa is not appropriate: kappa is designed for categorical ratings and treats the total scores as unordered categories, so it ignores the metric information in a continuous total. If you have two raters for the pre-test and two for the post-test, a correlation between the raters' totals would be informative. If you have more than two raters, computing the ICC (intraclass correlation) from the SPSS RELIABILITY procedure would be appropriate.
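
As a minimal sketch, assuming the data are arranged with one row per case (inmate) and one column per rater's total score, and using made-up variable names, the RELIABILITY syntax might look something like this:

    * Placeholder variable names; one column per rater's total score.
    RELIABILITY
      /VARIABLES=rater1_total rater2_total rater3_total
      /SCALE('Pretest total') ALL
      /MODEL=ALPHA
      /ICC=MODEL(RANDOM) TYPE(ABSOLUTE) CIN=95 TESTVAL=0.

MODEL(RANDOM) requests a two-way random-effects ICC and TYPE(ABSOLUTE) asks for absolute agreement rather than consistency. The output reports both single-measures and average-measures estimates; which one to report depends on whether scores from a single rater or the average across raters will be used in practice.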