What is a Confusion Matrix?

The confusion matrix is a very useful tool for assessing how good a machine-learning classification model is. Also known as an error matrix, it can be presented as a summary table for evaluating the performance of a classification model. The number of correct and incorrect predictions is summarized with count values and broken down by class.

The confusion matrix shows explicitly when one class is confused with another, which allows us to work separately with each type of error.
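
As a minimal sketch, assume a binary dog/not-dog classifier with labels encoded as 1 (dog) and 0 (not a dog); the label lists below are made up for illustration. scikit-learn's confusion_matrix builds the summary table directly:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical ground-truth labels and model predictions (1 = dog, 0 = not a dog)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Rows correspond to the actual classes, columns to the predicted classes
cm = confusion_matrix(y_true, y_pred)
print(cm)
# [[3 1]
#  [1 3]]
```

Each cell counts how often an actual class (row) was predicted as a given class (column), so the off-diagonal cells reveal exactly which classes are being confused.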

Structure of a 2×2 Confusion Matrix

Positive (P): The observation is positive (for example, it is a dog)

Negative (N): The observation is not positive (for example, it is not a dog)

True Positive (TP): An outcome in which the model correctly predicts the positive class

True Negative (TN): An outcome in which the model correctly predicts the negative class

False Positive (FP): Also called a Type I error, an outcome in which the model incorrectly predicts the positive class when the observation is actually negative

False Negative (FN): Also called a Type II error, an outcome in which the model incorrectly predicts the negative class when the observation is actually positive
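
To make the four outcomes concrete, the sketch below counts them directly from the same kind of label lists used above; the variable names are illustrative.

```python
# Hypothetical labels and predictions (1 = positive / dog, 0 = negative / not a dog)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # correctly predicted positives
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # correctly predicted negatives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # Type I errors
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # Type II errors

print(tp, tn, fp, fn)  # 3 3 1 1
```

In the binary case, scikit-learn exposes the same four counts via `tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()`.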