Explain Precision and Recall

Precision and recall are two metrics that together are used to evaluate the performance of classification or information retrieval systems. Precision is defined as the fraction of relevant instances among all retrieved instances. Recall, sometimes referred to as ‘sensitivity’, is the fraction of relevant instances that were retrieved. A perfect classifier has precision and recall both equal to 1.

It is often possible to calibrate a model, for example by adjusting its decision threshold or the number of results it returns, to improve precision at the expense of recall, or vice versa.

Precision and recall should always be reported together, since either one alone can be misleading. If a single numerical measurement of a system’s performance is required, the two are often combined into the F-score, the harmonic mean of precision and recall.

Precision and Recall Formulas

Mathematical definition of precision:

Precision = TP / (TP + FP)

Mathematical definition of recall:

Recall = TP / (TP + FN)

Precision and Recall Formula Symbols Explained

TP is the number of true positives, that is, the number of instances which are relevant and which the model correctly identified as relevant.
FP is the number of false positives, that is, the number of instances which are not relevant but which the model incorrectly identified as relevant.
FN is the number of false negatives, that is, the number of instances which are relevant but which the model incorrectly identified as not relevant.
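The counts above can be tallied directly from a set of predictions, assuming labels where 1 marks a relevant instance and 0 a non-relevant one (the function name `precision_recall` is illustrative):

```python
def precision_recall(y_true, y_pred):
    """Compute precision and recall from binary labels (1 = relevant)."""
    # Tally true positives, false positives, and false negatives.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
print(precision_recall(y_true, y_pred))  # (0.75, 0.75)
```

Here the model made 4 positive predictions of which 3 were relevant (precision 3/4), and retrieved 3 of the 4 relevant instances (recall 3/4).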