ROC Curve and AUC

The ROC (receiver operating characteristic) curve is a performance plot for binary classifiers, showing the true positive rate (y-axis) against the false positive rate (x-axis).

AUC is the area under the ROC curve, and it’s a common performance metric for evaluating binary classification models.

It’s equivalent to the probability that a uniformly drawn random positive example is ranked above (i.e., receives a higher score than) a uniformly drawn random negative example.
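This probabilistic interpretation can be computed directly by comparing every positive–negative pair of scores (ties count as half a win). A minimal sketch with made-up labels and scores, not taken from any real model:

```python
# Sketch: AUC as the probability that a randomly chosen positive
# is scored above a randomly chosen negative (ties count 0.5).
# Labels and scores below are toy values for illustration.
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1]

pos = [s for s, y in zip(scores, labels) if y == 1]
neg = [s for s, y in zip(scores, labels) if y == 0]

# Count, over all positive/negative pairs, how often the positive wins.
wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
auc = wins / (len(pos) * len(neg))
print(auc)  # 11 of 12 pairs are ranked correctly -> 0.9166...
```

This pairwise definition is exact but O(P·N); in practice libraries compute the same quantity from sorted ranks in O(n log n).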

**AUC**

The area under the curve (AUC) can be used as a single-number summary of model skill when comparing models. The higher the AUC, the better the model is at separating the two classes.
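To make the comparison concrete, here is a hedged sketch scoring two hypothetical models on the same test labels; the helper implements the pairwise-ranking definition of AUC, and all labels and scores are invented for illustration:

```python
# Sketch: comparing two hypothetical models by AUC on shared labels.
def auc(labels, scores):
    """Pairwise AUC: fraction of positive/negative pairs ranked correctly."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels  = [1, 1, 0, 0, 0]
model_a = [0.9, 0.6, 0.7, 0.2, 0.1]  # one negative outscores a positive
model_b = [0.8, 0.7, 0.5, 0.3, 0.2]  # every positive outscores every negative

print(auc(labels, model_a))  # 0.8333... (one of six pairs misranked)
print(auc(labels, model_b))  # 1.0 (perfect separation)
```

Note that AUC depends only on the ranking of scores, not their absolute values, so the two models' scores need not be calibrated to the same scale to be compared this way.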

**ROC**

ROC curves summarize the trade-off between the true positive rate and the false positive rate for a predictive model across different probability thresholds. ROC curves are appropriate when the classes are roughly balanced, whereas precision-recall curves are better suited to imbalanced datasets.
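The threshold sweep described above can be sketched directly: for each candidate threshold, count true and false positives, record the (FPR, TPR) point, and integrate the resulting curve with the trapezoidal rule. The labels and scores are toy values assumed for illustration:

```python
# Sketch: build an ROC curve by sweeping the decision threshold,
# then compute AUC with the trapezoidal rule. Toy data, for illustration.
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1]

P = sum(labels)           # number of positives
N = len(labels) - P       # number of negatives
thresholds = sorted(set(scores), reverse=True)

points = [(0.0, 0.0)]     # (FPR, TPR); start above every score
for t in thresholds:
    tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
    points.append((fp / N, tp / P))

# Trapezoidal area under the (FPR, TPR) curve.
auc = sum((x2 - x1) * (y1 + y2) / 2
          for (x1, y1), (x2, y2) in zip(points, points[1:]))
print(points[-1])  # (1.0, 1.0): lowest threshold accepts everything
print(auc)
```

The trapezoidal area here equals the pairwise ranking probability (11/12 for these scores), which is the equivalence stated earlier: the two definitions of AUC are the same quantity computed two ways.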