Advantages:
- Random Forest reduces the risk of bias from any single tree: multiple decision trees are trained, each on a random subset of the same training data, and their predictions are aggregated.
- It is very stable: introducing new data points into the dataset has little effect on the overall model, since a new point influences only some of the trees and is unlikely to change all of them.
- It also works well when the dataset contains both categorical and numerical features (see the sketch after this list).
- It performs well even when the dataset has missing values.
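
A minimal sketch of the mixed-feature case, assuming scikit-learn and pandas are available; the column names and toy data below are made up for illustration, with the categorical column one-hot encoded before the forest is fit:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Toy dataset with one categorical and two numerical columns (hypothetical names).
X = pd.DataFrame({
    "city": ["NY", "SF", "NY", "LA", "SF", "LA"],                 # categorical
    "age": [25, 32, 47, 51, 38, 29],                              # numerical
    "income": [40_000, 85_000, 62_000, 58_000, 90_000, 45_000],   # numerical
})
y = [0, 1, 1, 0, 1, 0]

# One-hot encode the categorical column; pass numerical columns through unchanged.
preprocess = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"), ["city"])],
    remainder="passthrough",
)

model = Pipeline([
    ("prep", preprocess),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=42)),
])
model.fit(X, y)
print(model.predict(X))
```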
Disadvantages:
- Complexity is the major disadvantage of this algorithm: combining a large number of decision trees requires considerably more computational resources and memory.
- Due to this complexity, training takes longer than for simpler algorithms, as the timing sketch below illustrates.
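
A minimal sketch of the training-cost trade-off, assuming scikit-learn is installed; the dataset size and the 500-tree count are arbitrary choices for illustration, not a benchmark:

```python
import time

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic classification data, large enough to make the cost difference visible.
X, y = make_classification(n_samples=20_000, n_features=40, random_state=0)

for name, model in [
    ("single decision tree", DecisionTreeClassifier(random_state=0)),
    ("random forest (500 trees)", RandomForestClassifier(
        n_estimators=500, n_jobs=-1, random_state=0)),
]:
    start = time.perf_counter()
    model.fit(X, y)
    print(f"{name}: trained in {time.perf_counter() - start:.2f} s")
```

The forest's extra cost scales roughly with the number of trees, which is why `n_jobs=-1` (parallel training across cores) is commonly used to offset it.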