
ROC Curve

Consider, for example, fraudulent insurance claims (1’s) and non-fraudulent ones (0’s).  The ROC (receiver operating characteristic) curve plots two quantities:

  • Recall (called sensitivity in medical statistics):  The proportion of 1’s (fraudulent claims) the model correctly identifies; plotted on the y-axis
  • Specificity:  The proportion of 0’s (non-fraudulent claims) the model correctly identifies; plotted on the x-axis, with 1 on the left and 0 on the right

Specifically, the model ranks all the records by their estimated probability of being a 1, with the most probable 1’s at the top.  To plot the curve, proceed down the ranked records, treating each record’s score as a cutoff: everything ranked above it is classified as a 1.  At each cutoff, calculate the cumulative recall and specificity to that point.  A well-performing model will catch many 1’s before it starts misidentifying 0’s as 1’s; its curve will hug the upper-left corner of the plot.
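
To make the procedure concrete, here is a minimal sketch in Python.  The scores and labels are invented for illustration, not data from the text:

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented model scores (estimated probability of fraud) and true labels (1 = fraud).
scores = np.array([0.95, 0.90, 0.80, 0.70, 0.60, 0.50, 0.40, 0.30, 0.20, 0.10])
labels = np.array([1, 1, 0, 1, 1, 0, 0, 1, 0, 0])

# Rank records by score, most probable 1's first.
order = np.argsort(-scores)
ranked = labels[order]

n_pos = ranked.sum()           # total 1's (fraudulent claims)
n_neg = len(ranked) - n_pos    # total 0's (non-fraudulent claims)

# Walk down the ranked list; at each cutoff, everything above it is called a 1.
recall = np.cumsum(ranked) / n_pos                # fraction of 1's caught so far
specificity = 1 - np.cumsum(1 - ranked) / n_neg   # fraction of 0's still correctly classified

plt.plot(specificity, recall)
plt.xlim(1, 0)   # specificity on the x-axis, 1 on the left, as described above
plt.xlabel("Specificity")
plt.ylabel("Recall (sensitivity)")
plt.title("ROC curve")
plt.show()
```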

The area under the curve (AUC) is a measure of the model’s overall discriminatory power.  The closer the curve lies to the upper-left corner, the closer the AUC is to 1, and the greater the discriminatory power.  The diagonal line represents a completely ineffective model, one no better than random guessing.
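
Continuing the sketch above, the AUC can be approximated by trapezoidal integration of recall against 1 − specificity; the values here are illustrative only:

```python
# Prepend the (recall = 0, specificity = 1) starting point so the trapezoid
# sum covers the whole curve, then integrate recall over 1 - specificity.
fpr = np.concatenate(([0.0], 1 - specificity))
tpr = np.concatenate(([0.0], recall))
auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)   # trapezoid rule
print(f"AUC = {auc:.3f}")

# Equivalent, if scikit-learn is available:
# from sklearn.metrics import roc_auc_score
# auc = roc_auc_score(labels, scores)
```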

One big shortcoming of the AUC metric is that it ignores asymmetric misclassification costs, which matter most when identifying rare cases.  For example, failing to identify a likely purchaser in a direct marketing campaign typically costs the company far more than mailing an offer to a misclassified non-purchaser.
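
To see how the asymmetry can play out, here is an illustrative calculation reusing the invented scores and labels from the sketch above; the costs and the expected_cost helper are made up for the example, not a library function:

```python
# Invented, illustrative costs for the direct-marketing example.
cost_fn = 50.0   # profit lost by failing to mail a true purchaser (missed 1)
cost_fp = 1.0    # cost of mailing an offer to a non-purchaser (false 1)

def expected_cost(threshold):
    calls = scores >= threshold                  # records we act on (call 1's)
    fn = np.sum((labels == 1) & ~calls)          # 1's we missed
    fp = np.sum((labels == 0) & calls)           # 0's we acted on by mistake
    return fn * cost_fn + fp * cost_fp

for t in (0.25, 0.75):
    print(f"cutoff {t}: total cost {expected_cost(t):.0f}")
```

Under these invented costs, the lower cutoff is far cheaper even though both operating points lie on the same ROC curve; that difference is exactly the information AUC averages away.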

Notes:

1.  The ROC curve was first used during World War II to describe the performance of radar receiving stations, whose job was to correctly identify (classify) reflected radar signals, and alert defense forces to incoming aircraft.

2.  Often the x-axis instead plots 1 − specificity (the false positive rate), with 0 on the left and 1 on the right.