What Are Classification Model Performance Metrics?

A classification performance report evaluates the quality of a classification model. It works for both binary and multi-class classification. If you have a probabilistic classifier, refer to the separate report for probabilistic classification. The report can be generated for a single model or as a comparison between models. The key classification metrics are Accuracy, Recall, Precision, and F1-score.

There are many ways to measure classification performance. Accuracy, the confusion matrix, log-loss, and AUC-ROC are among the most popular metrics, and precision-recall is also widely used for classification problems.
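Log-loss deserves a quick illustration, since unlike the other metrics it scores predicted probabilities rather than hard labels. As a minimal sketch in pure Python (the function name and the clipping epsilon are our own choices, not from any particular library), binary log-loss is the average negative log-likelihood of the true labels under the predicted probabilities:

```python
import math

def log_loss(y_true, y_prob, eps=1e-15):
    """Binary log-loss: average negative log-likelihood of the
    true labels under the predicted probabilities."""
    total = 0.0
    for y, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(y_true)

# Confident, mostly correct predictions give a low loss.
print(round(log_loss([1, 0, 1], [0.9, 0.1, 0.8]), 4))  # → 0.1446
```

Note how a confident wrong prediction (e.g. probability 0.99 for a negative example) is penalized far more heavily than an uncertain one, which is exactly why log-loss is preferred when calibrated probabilities matter.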

A confusion matrix is a performance measurement for machine learning classification problems where the output can be two or more classes. It is a table of the combinations of predicted and actual values, and it is often used to describe the performance of a classification model on a set of test data for which the true values are known.
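A confusion matrix can be built by simply counting (actual, predicted) pairs. The following is a small sketch in pure Python (the function name and example labels are illustrative, not from a specific library); rows correspond to actual classes and columns to predicted classes:

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, labels):
    """Rows = actual class, columns = predicted class."""
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(actual, pred)] for pred in labels] for actual in labels]

y_true = ["cat", "dog", "cat", "dog", "dog"]
y_pred = ["cat", "cat", "cat", "dog", "dog"]

# Row 0: actual "cat", row 1: actual "dog".
print(confusion_matrix(y_true, y_pred, ["cat", "dog"]))  # → [[2, 0], [1, 2]]
```

Because it only counts label pairs, the same function works unchanged for any number of classes, which is what makes the confusion matrix suitable for multi-class problems as well.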

  • True Positive: we predicted positive and it is true. For example, we predicted that a woman is pregnant, and she actually is.
  • True Negative: we predicted negative and it is true. For example, we predicted that a man is not pregnant, and he actually is not.
  • False Positive (Type 1 Error): we predicted positive and it is false. For example, we predicted that a man is pregnant, but he actually is not.
  • False Negative (Type 2 Error): we predicted negative and it is false. For example, we predicted that a woman is not pregnant, but she actually is.
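The four cells above are also the building blocks of the key metrics mentioned earlier. As a sketch (pure Python, with a hypothetical helper name), one can count TP, TN, FP, and FN for a binary problem and derive Accuracy, Precision, Recall, and F1-score from them:

```python
def binary_metrics(y_true, y_pred):
    """Count the four confusion-matrix cells for binary labels (0/1),
    then compute the core classification metrics from them."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0  # of predicted positives, how many are right
    recall = tp / (tp + fn) if tp + fn else 0.0     # of actual positives, how many we found
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)           # harmonic mean of precision and recall
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# tp=2, tn=2, fp=1, fn=1 for this toy example.
print(binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0]))
```

The guards against division by zero matter in practice: a model that never predicts positive has an undefined precision, and returning 0.0 in that case is one common convention.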