Glossary of statistical terms
Kappa statistic is a generic term for several similar measures of agreement used with categorical data. Typically it is used in assessing the degree to which two or more raters, examining the same data, agree when it comes to assigning the data to categories. For example, kappa might be used to assess the extent to which (1) radiologist analysis of an x-ray, (2) computer analysis of the same x-ray, and (3) biopsy agree in labeling a growth "malignant" or "benign."
Suppose each object in a group of M objects is assigned to one of n categories. The categories are on a nominal scale. For each object, such assignments are made by k raters.
The kappa measure of agreement is the ratio

K = (P(A) - P(E)) / (1 - P(E))

where P(A) is the proportion of times the k raters agree, and P(E) is the proportion of times the k raters would be expected to agree by chance alone.
Complete agreement corresponds to K = 1, and lack of agreement (i.e. purely random coincidences of ratings) corresponds to K = 0. A negative value of kappa would mean negative agreement, that is, a propensity of raters to avoid assignments made by other raters.
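For the two-rater case, the definition above can be sketched in a few lines of Python. The function below is a minimal illustration, not a library routine: it estimates P(A) as the observed proportion of matching assignments and P(E) from each rater's marginal category frequencies (the chance-agreement model used by Cohen's kappa).

```python
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Kappa for two raters assigning the same objects to categories."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # P(A): observed proportion of agreement
    p_obs = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # P(E): agreement expected by chance, from each rater's
    # marginal category frequencies
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_exp = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical data: two raters label six growths
a = ["malignant", "benign", "benign", "malignant", "benign", "benign"]
b = ["malignant", "benign", "malignant", "malignant", "benign", "benign"]
print(round(cohen_kappa(a, b), 3))  # raters agree on 5 of 6 objects
```

Here P(A) = 5/6 and P(E) = 1/2, giving K = (5/6 - 1/2) / (1 - 1/2) = 2/3: agreement well above chance, but short of complete agreement. For k > 2 raters, a generalization such as Fleiss' kappa is used instead.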
Want to learn more about this topic?
Statistics.com offers over 100 courses in statistics from introductory to advanced level. Most are 4 weeks long and take place online in a series of weekly lessons and assignments, requiring about 15 hours/week. Participate at your convenience; there are no set times when you must be online. Ask questions and exchange comments with the instructor and other students on a private discussion board throughout the course.
This course covers the analysis of data gathered in surveys.
This course will cover the analysis of contingency table data (tabular data in which the cell entries represent counts of subjects or items falling into certain categories). Topics include tests for independence (comparing proportions as well as chi-square), exact methods, and treatment of ordered data. Both 2-way and 3-way tables are covered.