How do we tackle problems like underfitting and overfitting, and when do we encounter them?

The underfitting and overfitting problem.

  • When we have too little data, the model is likely to overfit.
  • When we overtrain the model, it overlearns the training data, which works against generalization.
  • If the model performs much better on training data than on test data, it has overfitted.
  • When noise is not removed from the data before the model is built, the model fits the noise instead of the underlying signal.
  • Ways to tackle the problem of overfitting:
    • Cross-validation (see the first sketch after this list)
    • Training with more data
    • Removing some features
    • Regularization (second sketch below)
    • Early stopping: when you train a learning algorithm iteratively, you can measure how well each iteration of the model performs. Up to a certain number of iterations, new iterations improve the model; after that point, however, the model's ability to generalize can weaken as it begins to overfit the training data. Early stopping means halting training before the learner passes that point (third sketch below).
    • Ensemble learning (fourth sketch below)
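
A minimal cross-validation sketch in Python with scikit-learn. The data is synthetic and the estimator and parameters are illustrative, not a prescribed setup:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic data stands in for a real dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

model = LogisticRegression(max_iter=1000)

# Each of the 5 folds is held out once for validation while the model
# trains on the other 4, so overfitting that a single train/test split
# might hide shows up as unstable or low fold scores.
scores = cross_val_score(model, X, y, cv=5)
print("fold accuracies:", scores)
print("mean accuracy:", scores.mean())
```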
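
A sketch of L2 (ridge) regularization under the same illustrative assumptions; the alpha value here is arbitrary and would normally be tuned, for example by cross-validation:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import train_test_split

# Many features relative to samples: a setting where plain least
# squares tends to overfit.
X, y = make_regression(n_samples=100, n_features=50, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

plain = LinearRegression().fit(X_train, y_train)
ridge = Ridge(alpha=10.0).fit(X_train, y_train)  # alpha sets penalty strength

# The penalized model usually trades a little training fit for better
# held-out performance.
print("plain R^2 train/test:", plain.score(X_train, y_train), plain.score(X_test, y_test))
print("ridge R^2 train/test:", ridge.score(X_train, y_train), ridge.score(X_test, y_test))
```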
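
A sketch of early stopping as described above, using a "patience" counter. The train_one_epoch and validation_loss arguments are hypothetical placeholders for whatever training and evaluation routines a project actually uses:

```python
def train_with_early_stopping(model, train_one_epoch, validation_loss,
                              max_epochs=100, patience=5):
    """Stop training once validation loss stops improving."""
    best_loss = float("inf")
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_one_epoch(model)          # one pass over the training data
        loss = validation_loss(model)   # measure generalization each iteration
        if loss < best_loss:
            best_loss = loss
            epochs_without_improvement = 0   # still improving
        else:
            epochs_without_improvement += 1  # generalization may be weakening
        if epochs_without_improvement >= patience:
            break   # stop before the model overfits further
    return model
```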
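
A sketch of ensemble learning via bagging: a random forest averages many trees trained on bootstrap samples, reducing the variance that makes a single deep tree overfit. Again, the data and parameters are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A single unpruned tree typically scores near 100% on training data
# but worse on test data; the ensemble narrows that gap.
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("single tree train/test:", tree.score(X_train, y_train), tree.score(X_test, y_test))
print("forest      train/test:", forest.score(X_train, y_train), forest.score(X_test, y_test))
```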