It is a classification technique based on Bayes’ Theorem with an assumption of independence among predictors. In simple terms, a Naive Bayes classifier assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature.
- Naive Bayes classifiers are linear classifiers based on Bayes’ theorem. The model generated is probabilistic: it estimates conditional probability, the probability that one event occurs given that another event has already occurred. For example, a given email is likely spam given the appearance of words such as ‘prize’.
- It is called naive due to the assumption that the features in the dataset are mutually independent. In the real world, the independence assumption is often violated, but naive Bayes classifiers still tend to perform very well.
- The idea is to factor all available evidence, in the form of predictors, into the naive Bayes rule to obtain a more accurate probability for class prediction. Being relatively robust, easy to implement, fast, and accurate, naive Bayes classifiers are used in many different fields.
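The spam example above can be sketched as a minimal word-counting Naive Bayes classifier. The tiny training set, word lists, and function names below are hypothetical, chosen only to illustrate how priors and per-word likelihoods combine under the independence assumption:

```python
from collections import Counter

# Hypothetical toy training set: (words in message, label)
train = [
    (["win", "prize", "now"], "spam"),
    (["claim", "prize", "money"], "spam"),
    (["meeting", "tomorrow", "agenda"], "ham"),
    (["lunch", "tomorrow"], "ham"),
]

def fit(data):
    """Estimate class priors and per-class word counts from training data."""
    labels = [label for _, label in data]
    priors = {c: labels.count(c) / len(labels) for c in set(labels)}
    word_counts = {c: Counter() for c in priors}
    for words, label in data:
        word_counts[label].update(words)
    vocab = {w for words, _ in data for w in words}
    return priors, word_counts, vocab

def predict(words, priors, word_counts, vocab):
    """Score each class as prior times the product of word likelihoods.

    Multiplying individual word likelihoods is exactly the naive
    independence assumption. Laplace (add-one) smoothing avoids zero
    probabilities for words unseen in a class.
    """
    scores = {}
    for c, prior in priors.items():
        total = sum(word_counts[c].values())
        score = prior
        for w in words:
            score *= (word_counts[c][w] + 1) / (total + len(vocab))
        scores[c] = score
    return max(scores, key=scores.get)

priors, word_counts, vocab = fit(train)
print(predict(["prize", "money"], priors, word_counts, vocab))  # spam
```

Because ‘prize’ appears only in spam messages, the spam likelihood dominates and the message is classified as spam despite the tiny training set.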
- Event: Outcome of an experiment.
- Experiment: Process performed to understand possible outcomes.
- Sample Space: Set of all outcomes of an experiment.
- Probability: Chance of a particular event taking place.
- Joint Probability: It is the probability of multiple events occurring together.
- From a pack of cards, what is the probability that the card is a red king? (There are 2 red kings in 52 cards, so 2/52 = 1/26.)
- Conditional Probability: It is the probability of an event occurring given that another event has already occurred. It is undefined if the probability of the conditioning event is zero.
- From a pack of cards, what is the probability that the card is a king given that it is red? (There are 2 kings among the 26 red cards, so 2/26 = 1/13.)
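The two card questions above can be worked through directly from the definitions, using exact fractions to keep the arithmetic transparent:

```python
from fractions import Fraction

# Standard 52-card deck: 26 red cards, 2 of which are kings
deck = 52
red = 26
red_kings = 2

# Joint probability: P(red AND king) = 2/52 = 1/26
p_red_and_king = Fraction(red_kings, deck)

# Conditional probability: P(king | red) = P(red AND king) / P(red)
p_red = Fraction(red, deck)
p_king_given_red = p_red_and_king / p_red  # (1/26) / (1/2) = 1/13

print(p_red_and_king)    # 1/26
print(p_king_given_red)  # 1/13
```

Note how conditioning on “red” shrinks the sample space from 52 cards to 26, which is why the conditional probability (1/13) is twice the joint probability (1/26).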