Kappa Statistic

The kappa statistic is a generic term for several similar measures of agreement used with categorical data. Typically it is used to assess the degree to which two or more raters, examining the same data, agree in assigning the data to categories. For example, kappa might be used to assess the extent to which (1) a radiologist's analysis of an x-ray, (2) a computer's analysis of the same x-ray, and (3) a biopsy agree in labeling a growth "malignant" or "benign."

Suppose each object in a group of M objects is assigned to one of n categories, where the categories are on a nominal scale. For each object, these assignments are made by each of k raters.

The kappa measure of agreement is the ratio

K = [P(A) - P(E)] / [1 - P(E)]

where P(A) is the proportion of times the k raters agree, and P(E) is the proportion of times the k raters are expected to agree by chance alone.

Complete agreement corresponds to K = 1, and lack of agreement (i.e., purely random coincidence of ratings) corresponds to K = 0. A negative value of kappa indicates negative agreement, that is, a propensity of the raters to avoid the assignments made by the other raters.
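
As a concrete illustration of the formula above, here is a minimal Python sketch for the simplest case of two raters (Cohen's kappa). The cohens_kappa function and the rater_1/rater_2 data are hypothetical examples, not part of the original entry: P(A) is the observed proportion of objects on which the raters agree, and P(E) is the chance agreement computed from each rater's marginal category proportions.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Two-rater kappa: K = [P(A) - P(E)] / [1 - P(E)]."""
    if len(ratings_a) != len(ratings_b):
        raise ValueError("Both raters must rate the same M objects")
    m = len(ratings_a)

    # P(A): observed proportion of objects on which the two raters agree
    p_a = sum(a == b for a, b in zip(ratings_a, ratings_b)) / m

    # P(E): chance agreement, from each rater's marginal category proportions
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    categories = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[c] / m) * (freq_b[c] / m) for c in categories)

    return (p_a - p_e) / (1 - p_e)

# Hypothetical example: two raters label 10 growths as malignant or benign
rater_1 = ["malignant", "benign", "benign", "malignant", "benign",
           "benign", "malignant", "benign", "benign", "benign"]
rater_2 = ["malignant", "benign", "malignant", "malignant", "benign",
           "benign", "benign", "benign", "benign", "benign"]
print(cohens_kappa(rater_1, rater_2))  # P(A) = 0.8, P(E) = 0.58, K ≈ 0.524
```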

See also: Cohen's Kappa, Weighted Kappa.


Courses Using This Term

Categorical Data Analysis
This course will teach you the analysis of contingency table data. Topics include tests for independence, comparison of proportions, chi-square and exact methods, and the treatment of ordered data. Both 2-way and 3-way tables are covered.
Survey Analysis
This course will teach you how to analyze data gathered in surveys.