Solved – Determining the number of raters for inter-rater reliability

agreement-statistics

I intend to ask a number of experts to match five out of a possible ten team roles (recorder, chairman and so on) to five distinct stages of a project.

I cannot seem to find any source for determining how many raters I need in order to achieve sufficient inter-rater reliability. Does anyone know of a reliable source for calculating the number of raters needed?

Best Answer

Usually there are only two raters in an inter-rater reliability study (although there can be more). You don't get higher reliability simply by adding more raters: inter-rater reliability is usually measured by Cohen's $\kappa$ or a correlation coefficient.
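
For illustration, here is a minimal sketch of computing Cohen's $\kappa$ for two raters in Python. The rating vectors are made-up examples, and scikit-learn's `cohen_kappa_score` is used only as a convenience check against the hand computation.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical example: two raters each assign one of three categories
# ("A", "B", "C") to the same ten items.
rater1 = np.array(["A", "A", "B", "C", "A", "B", "B", "C", "A", "C"])
rater2 = np.array(["A", "B", "B", "C", "A", "B", "C", "C", "A", "C"])

# Observed agreement: proportion of items on which the raters agree.
p_o = np.mean(rater1 == rater2)

# Expected chance agreement, from each rater's marginal category proportions.
categories = np.union1d(rater1, rater2)
p_e = sum(np.mean(rater1 == c) * np.mean(rater2 == c) for c in categories)

# Cohen's kappa: chance-corrected agreement.
kappa = (p_o - p_e) / (1 - p_e)
print(f"kappa (by hand):      {kappa:.3f}")
print(f"kappa (scikit-learn): {cohen_kappa_score(rater1, rater2):.3f}")
```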

You get higher reliability by having either better items or better raters.

The Wikipedia entry is not a bad place to start, and it has good references.

You may be confusing number of raters with number of items.
