Precision vs. Recall vs. F1 Score

Accuracy is not an ideal metric when classes are imbalanced or not of equal importance. In those cases, we need metrics that look beyond overall correctness. :bulb:
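To see why, here's a minimal sketch using scikit-learn on made-up imbalanced labels (the dataset and the always-negative "model" are hypothetical):

```python
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical imbalanced dataset: 95 negatives, 5 positives
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100  # a lazy "model" that always predicts the majority class

print(accuracy_score(y_true, y_pred))             # 0.95 -- looks impressive
print(f1_score(y_true, y_pred, zero_division=0))  # 0.0  -- exposes the problem
```

The model never finds a single positive, yet accuracy rewards it with 95%. That is exactly the gap Precision, Recall, and F1 are meant to close.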

:small_blue_diamond:Precision: the number of True Positives (TP) divided by the total number of positive predictions made by the model (TP + FP). In essence, Precision tells us what fraction of the predicted positives are actually positive. Precision is the metric to watch when the cost of a False Positive (FP) is high.

:small_blue_diamond:Recall: the number of TP divided by the total number of actual positives (TP + FN). In essence, Recall tells us what fraction of the actual positives the model catches. Recall is the metric to watch when the cost of a False Negative (FN) is high.

:small_blue_diamond:F1 Score: a balance between Precision and Recall. It is the harmonic mean of the two, 2 × (Precision × Recall) / (Precision + Recall), and is often more informative than accuracy on imbalanced data (see the sketch below). :white_check_mark:
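Here's a quick sketch of all three metrics with scikit-learn (the labels below are made up purely for illustration):

```python
from sklearn.metrics import precision_score, recall_score, f1_score

# Hypothetical predictions from a binary classifier
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0, 0, 0]

# Counts for this data: TP=2, FP=1, FN=2, TN=5
precision = precision_score(y_true, y_pred)  # TP / (TP + FP) = 2/3 ~ 0.67
recall = recall_score(y_true, y_pred)        # TP / (TP + FN) = 2/4 = 0.50
f1 = f1_score(y_true, y_pred)                # harmonic mean ~ 0.57
print(precision, recall, f1)
```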

This is because the F1 score gets pulled down if either Precision or Recall is low. For example, when Precision is 100% and Recall is 0%, the F1 score is 0%, not the 50% an arithmetic mean would give.
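A tiny sketch of that pull-down effect, using nothing beyond the harmonic-mean formula itself:

```python
def f1(precision, recall):
    # Harmonic mean: collapses to 0 if either input is 0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f1(1.0, 0.0))  # 0.0  -- not the 0.5 an arithmetic mean would report
print(f1(0.9, 0.1))  # 0.18 -- low recall drags the whole score down
```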

But can we blindly go with F1 every time? No. :thinking:

Choosing the right metric depends entirely on the domain and the data. More on this in tomorrow's post! :zap:

#datascience #machinelearning