Should we trust AI?

Hello Everyone,

How do you decide whether a decision you or someone else made was a good one? Do you simply check whether the outcome was favorable? I would hope not.

By this simple logic, if I wanted more energy, eating a handful of candy would be a great way to get it, since it is cheap and quick (not to mention tasty). Although the initial outcome is the one I was looking for, the full outcome is a burst of energy lasting less than an hour, an energy crash, a possible upset stomach, and extra fat that, ironically, requires energy to work off. Hence, a favorable, easy outcome that was actually a bad decision.

Trust is an important aspect of how we humans interact with one another. According to Psychology Today, here are a few key facets that can define trust:

  1. Trust is a set of behaviors, such as acting in ways that depend on another.
  2. Trust is a belief in a probability that a person will behave in certain ways.
  3. Trust is an abstract mental attitude toward a proposition that someone is dependable.

At the end of the day, AI is more popular than ever. Its use cases are broad and adaptive, and its benefits are enormous. As a byproduct, however, we are also trusting AI more, whether we know it or not. Unfortunately, we have not been as rigorous about trusting AI as we tend to be with our fellow humans. We must be, though, if we want AI systems that can be trusted and depended on.

The worst-case scenario for any business using AI is that new data arrives that the model cannot predict well on. If we blindly trust our AI, it will keep making predictions until someone notices how wildly incorrect its outputs have become. For a small enterprise, the financial and reputational dent from incorrect predictions may be small. For a larger enterprise, the effects could be devastating.
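One lightweight guard against this failure mode is monitoring incoming data for drift away from what the model was trained on. Here is a minimal sketch in plain Python, assuming we have stored each feature's training mean and standard deviation; the feature names, values, and the z-score threshold are hypothetical, and real systems would use more robust drift tests:

```python
import statistics

def drift_alerts(train_stats, new_batch, z_threshold=3.0):
    """Flag features whose mean in a production batch drifts far from training.

    train_stats: {feature: (train_mean, train_std)}
    new_batch:   {feature: [values observed in production]}
    Returns the features whose batch mean lies more than z_threshold
    training standard deviations away from the training mean.
    """
    alerts = []
    for feature, (mean, std) in train_stats.items():
        batch_mean = statistics.mean(new_batch[feature])
        if std > 0 and abs(batch_mean - mean) / std > z_threshold:
            alerts.append(feature)
    return alerts

# Toy example: "age" looks stable, but "income" has drifted sharply,
# so a model trained on the old income range deserves scrutiny.
train_stats = {"age": (40.0, 10.0), "income": (50_000.0, 5_000.0)}
new_batch = {"age": [38, 42, 41], "income": [95_000, 102_000, 99_000]}
print(drift_alerts(train_stats, new_batch))  # → ['income']
```

A check like this does not tell you the model is wrong, only that it is being asked questions unlike the ones it studied for, which is exactly when blind trust is most dangerous.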

Therefore, it is important that we constantly check our AI systems for bias and understand how our models are actually working on the inside. This is where Explainable AI (XAI) and libraries such as LIME and SHAP come in, which we will discuss in a future blog.
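To give a taste of the idea before that post, here is a permutation-importance sketch in plain Python, a much simpler cousin of what LIME and SHAP do: shuffle one feature at a time and measure how much the model's accuracy drops. A feature the model truly relies on causes a large drop; an ignored feature causes none. The toy model and data below are entirely hypothetical:

```python
import random

def accuracy(model, rows, labels):
    """Fraction of rows the model labels correctly."""
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(model, rows, labels, feature_idx, seed=0):
    """Accuracy drop after shuffling one feature column across rows."""
    rng = random.Random(seed)
    baseline = accuracy(model, rows, labels)
    column = [r[feature_idx] for r in rows]
    rng.shuffle(column)
    shuffled = [r[:feature_idx] + (v,) + r[feature_idx + 1:]
                for r, v in zip(rows, column)]
    return baseline - accuracy(model, shuffled, labels)

# Toy model: predicts 1 iff feature 0 is positive; feature 1 is ignored.
model = lambda row: int(row[0] > 0)
rows = [(1, 5), (-1, 5), (2, -3), (-2, -3), (3, 0), (-3, 0)]
labels = [1, 0, 1, 0, 1, 0]

print(permutation_importance(model, rows, labels, 0))  # feature 0: large drop
print(permutation_importance(model, rows, labels, 1))  # feature 1: no drop
```

Even this crude probe makes the model less of a black box: it tells us *which* inputs the predictions actually depend on, which is a first step toward trusting them.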

Please do share your thoughts on this whole idea of putting our trust in AI; I would love to hear what the community thinks.