RT Journal Article
SR Electronic
T1 A practical application of analysing weighted kappa for panels of experts and EQA schemes in pathology
JF Journal of Clinical Pathology
JO J Clin Pathol
FD BMJ Publishing Group Ltd and Association of Clinical Pathologists
SP 257
OP 260
DO 10.1136/jcp.2010.086330
VO 64
IS 3
A1 Wright, Karen C
A1 Harnden, Patricia
A1 Moss, Sue
A1 Berney, Dan M
A1 Melia, Jane
YR 2011
UL http://jcp.bmj.com/content/64/3/257.abstract
AB Background: Kappa statistics are frequently used to analyse observer agreement for panels of experts and External Quality Assurance (EQA) schemes and generally treat all disagreements as total disagreement. However, the differences between ordered categories may not be of equal importance (eg, the difference between grades 1 vs 2 compared with 1 vs 3). Weighted kappa can be used to adjust for this when comparing a small number of readers, but this has not as yet been applied to the large number of readers typical of a national EQA scheme. Aim: To develop and validate a method for applying weighted kappa to a large number of readers within the context of a real dataset: the UK National Urological Pathology EQA Scheme for prostatic biopsies. Methods: Data on Gleason grade recorded by 19 expert readers were extracted from the fixed text responses of 20 cancer cases from four circulations of the EQA scheme. Composite kappa, currently used to compute an unweighted kappa for large numbers of readers, was compared with the mean kappa for all pairwise combinations of readers. Weighted kappa generalised for multiple readers was compared with the newly developed 'pairwise-weighted' kappa. Results: For unweighted analyses, the median increase from composite to pairwise kappa was 0.006 (range −0.005 to +0.052). The difference between the pairwise-weighted kappa and generalised weighted kappa for multiple readers never exceeded ±0.01. Conclusion: Pairwise-weighted kappa is a suitable and highly accurate approximation to weighted kappa for multiple readers.
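The pairwise approach described in the abstract — averaging Cohen's weighted kappa over all pairwise combinations of readers — can be sketched as below. This is a minimal illustration, not the authors' implementation: the function names and the choice of linear weights (the paper does not state its weighting scheme here) are assumptions for the example.

```python
from itertools import combinations

def weighted_kappa(r1, r2, categories, weight="linear"):
    """Cohen's weighted kappa for two readers' ratings on ordered categories."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(r1)
    # Observed joint proportions over the k x k agreement table.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[idx[a]][idx[b]] += 1.0 / n
    # Marginal proportions for each reader.
    p1 = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    # Agreement weights: 1 on the diagonal, shrinking with category distance,
    # so adjacent grades (1 vs 2) count as partial agreement, distant ones less.
    def w(i, j):
        d = abs(i - j) / (k - 1)
        return 1.0 - d if weight == "linear" else 1.0 - d ** 2
    po = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    pe = sum(w(i, j) * p1[i] * p2[j] for i in range(k) for j in range(k))
    return (po - pe) / (1.0 - pe)

def pairwise_weighted_kappa(ratings, categories, weight="linear"):
    """'Pairwise-weighted' kappa: mean weighted kappa over all reader pairs."""
    pairs = list(combinations(ratings, 2))
    total = sum(weighted_kappa(a, b, categories, weight) for a, b in pairs)
    return total / len(pairs)
```

With, say, three readers grading five cases on categories [1, 2, 3], `pairwise_weighted_kappa([[1,2,3,2,1], [1,2,3,3,1], [1,3,3,2,1]], [1,2,3])` averages the three pairwise weighted kappas, avoiding the need to generalise the weighted-kappa formula itself to many readers.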