Problems with the AUC-ROC curve:
- Are sensitivity and specificity equally important?
- Confidence scales may be inconsistent and unreliable.
- Prevalence of abnormality.
- Clinical comprehension and relevance.
To read more: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4356897/
ROC captures the trade-off between the True Positive Rate (the proportion of actual positives correctly classified as 1) and the False Positive Rate (the proportion of actual negatives incorrectly classified as 1).
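As a minimal sketch of the two rates, here is TPR and FPR computed at a single cut-off on hypothetical labels and scores (the data and the 0.5 threshold are illustrative assumptions, not from the source):

```python
# Hypothetical labels and model scores, for illustration only.
labels = [1, 1, 1, 0, 0, 0]               # 1 = positive class, 0 = negative class
scores = [0.9, 0.8, 0.3, 0.7, 0.2, 0.1]   # model confidence for class 1

threshold = 0.5
preds = [1 if s >= threshold else 0 for s in scores]

tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
tn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 0)

tpr = tp / (tp + fn)  # True Positive Rate: share of positives correctly flagged
fpr = fp / (fp + tn)  # False Positive Rate: share of negatives incorrectly flagged
print(tpr, fpr)
```

Moving the threshold up or down trades one rate against the other, which is exactly what the ROC curve traces out.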
The ROC plot is built by computing these two rates at every possible cut-off; a curve that hugs the top-left corner indicates better classifier performance.
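The sweep over all cut-offs can be sketched as follows; the toy labels and scores are assumptions for illustration, and each distinct score is used as a threshold:

```python
# Enumerate every cut-off and collect the (FPR, TPR) points of the ROC curve.
labels = [1, 1, 1, 0, 0, 0]               # toy ground-truth labels
scores = [0.9, 0.8, 0.3, 0.7, 0.2, 0.1]   # toy model scores

def roc_points(labels, scores):
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    # Each distinct score is a threshold; a sentinel above the max yields (0, 0).
    for t in sorted(set(scores) | {max(scores) + 1}, reverse=True):
        tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= t)
        fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= t)
        points.append((fp / neg, tp / pos))
    return points

print(roc_points(labels, scores))
```

The curve always starts at (0, 0) (nothing predicted positive) and ends at (1, 1) (everything predicted positive); how quickly TPR rises relative to FPR between those endpoints is what separates a strong model from a weak one.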
AUC is the Area Under the Curve in the ROC plot, and it summarizes the whole curve in one number. AUC = 1 means every positive example is scored higher than every negative example (a perfect ranking); AUC = 0 means every negative example is scored higher than every positive example (a perfectly inverted ranking); AUC = 0.5 corresponds to a random classifier.
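This ranking reading of AUC can be checked directly: AUC equals the fraction of (positive, negative) pairs in which the positive example receives the higher score, with ties counted as half. A small sketch on assumed toy data:

```python
# Ranking interpretation of AUC: the probability that a randomly chosen
# positive example is scored above a randomly chosen negative example.
labels = [1, 1, 1, 0, 0, 0]               # toy labels, for illustration
scores = [0.9, 0.8, 0.3, 0.7, 0.2, 0.1]   # toy scores, for illustration

def auc_by_ranking(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Count pairs where the positive outranks the negative; ties count 0.5.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc_by_ranking(labels, scores))
```

A perfectly ranked set of scores gives exactly 1.0 under this formula, and swapping all labels gives 0.0, matching the AUC = 1 and AUC = 0 cases above.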