Journal article
Interpretation of Kappa and B statistics measures of agreement
Abstract
The Kappa statistic proposed by Cohen and the B statistic proposed by Bangdiwala are used to quantify the agreement between two observers, independently classifying the same n units into the same k categories. Both statistics correct for the agreement expected to result from chance alone, but the Kappa statistic is a measure that adjusts the observed proportion of agreement and ranges from −p_c/(1 − p_c) to 1, where p_c is the expected agreement …
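The chance-corrected agreement described in the abstract, κ = (p_o − p_c)/(1 − p_c), can be illustrated with a minimal sketch (not taken from the article itself); the example table of two observers' ratings is hypothetical:

```python
def cohen_kappa(table):
    """Cohen's kappa: (p_o - p_c) / (1 - p_c), where p_o is the observed
    proportion of agreement (the diagonal of the k x k table) and p_c is
    the agreement expected by chance (sum of products of the marginal
    proportions)."""
    n = sum(sum(row) for row in table)
    k = len(table)
    p_o = sum(table[i][i] for i in range(k)) / n
    row_marg = [sum(row) / n for row in table]
    col_marg = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
    p_c = sum(row_marg[i] * col_marg[i] for i in range(k))
    return (p_o - p_c) / (1 - p_c)

# Hypothetical example: two observers classify 100 units into 2 categories.
table = [[40, 10],
         [5, 45]]
print(cohen_kappa(table))  # 0.7: p_o = 0.85, p_c = 0.5
```

With p_o = 0 (no observed agreement), the formula gives κ = −p_c/(1 − p_c), the lower bound quoted in the abstract.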
Authors
Munoz SR; Bangdiwala SI
Journal
Journal of Applied Statistics, Vol. 24, No. 1, pp. 105–112
Publisher
Taylor & Francis
Publication Date
February 1997
DOI
10.1080/02664769723918
ISSN
0266-4763