Balancing False Positives and False Negatives

Is the model that balances false positives and false negatives always the best? Not necessarily. Let’s look at a few examples:

  • Fraud transaction detection: It is not acceptable to miss a fraudulent transaction (a false negative), but a higher false-positive rate is tolerable, since it only makes the institution extra cautious in preventing fraud.

  • Customer identification for sales: Suppose an ML model predicts which prospects are likely to buy your product or service, so that a limited sales team can focus its customer-acquisition calls. Here a large number of false positives is a problem, because it defeats the prime objective of reducing the sales team’s effort.

  • ML model for stock trading calls: Balance FPs and FNs, since false positives mean losses while false negatives mean missed opportunities.

Moral of the topic: think through the use case and then decide where to set the balance.
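
In practice, one common lever for shifting this balance is the classification threshold. Below is a minimal sketch, assuming scikit-learn and a synthetic imbalanced dataset as a stand-in for a real fraud or sales dataset, that shows how lowering or raising the threshold trades false negatives for false positives.

```python
# Sketch: how the decision threshold shifts the FP/FN trade-off.
# Assumptions: scikit-learn is available; the dataset is synthetic, not real fraud data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Imbalanced toy data: ~5% positives, mimicking a rare-event problem.
X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
proba = clf.predict_proba(X_test)[:, 1]  # predicted probability of the positive class

for threshold in (0.2, 0.5, 0.8):
    preds = (proba >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_test, preds).ravel()
    print(f"threshold={threshold:.1f}  FP={fp:4d}  FN={fn:4d}")

# A lower threshold gives fewer FNs at the cost of more FPs (the fraud-detection case);
# a higher threshold gives fewer FPs at the cost of more FNs (the limited-sales-team case).
```

The model itself is unchanged in each case; only the operating point moves, which is why the right choice depends on the cost of each error type in the specific use case.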