Explain Regularization in detail.

Regularization refers to techniques used to calibrate machine learning models by minimizing an adjusted (penalized) loss function, preventing overfitting or underfitting. Using regularization, we can fit a machine learning model appropriately on a given training set so that it generalizes well to unseen test data, and hence reduce its errors.

The commonly used regularization techniques are:

  1. Ridge regression
  2. Lasso regression
  • A regression model that uses the L1 regularization technique is called LASSO (Least Absolute Shrinkage and Selection Operator) regression.
  • A regression model that uses the L2 regularization technique is called Ridge regression.
  • Lasso regression adds the “absolute value of magnitude” of each coefficient as a penalty term to the loss function (L), while Ridge regression adds the squared magnitude; both forms are written out just below this list.
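
In standard notation (using λ for the regularization strength and w_j for the model coefficients, symbols that are conventional rather than defined in the text above), the two adjusted loss functions can be written as:

```latex
% L1 (Lasso): penalty is the sum of absolute coefficient values
L_{\text{reg}}(w) = L(w) + \lambda \sum_{j} |w_j|

% L2 (Ridge): penalty is the sum of squared coefficient values
L_{\text{reg}}(w) = L(w) + \lambda \sum_{j} w_j^{2}
```

Larger values of λ shrink the coefficients more aggressively; λ = 0 recovers the unregularized model.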

Ridge Regression

  • Ridge regression is a type of linear regression in which a small amount of bias is introduced so that we can get better long-term predictions, i.e., better generalization to unseen data.
  • Ridge regression is a regularization technique used to reduce the complexity of the model. It is also called L2 regularization.
  • In this technique, the cost function is altered by adding a penalty term to it. The amount of bias added to the model is called the Ridge regression penalty. It is calculated by multiplying lambda (λ) by the sum of the squared weights of the individual features, as sketched below.
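
As a concrete illustration, here is a minimal NumPy sketch of this cost calculation; the helper name `ridge_cost`, the toy data, and the λ values are illustrative assumptions, not part of the original text:

```python
import numpy as np

def ridge_cost(X, y, w, lam):
    """Squared-error loss plus the L2 (Ridge) penalty: sum((y - Xw)^2) + lam * sum(w^2)."""
    residuals = y - X @ w            # prediction errors
    loss = np.sum(residuals ** 2)    # ordinary least-squares loss
    penalty = lam * np.sum(w ** 2)   # Ridge penalty: lambda times the squared weights
    return loss + penalty

# Toy data (illustrative values only): 5 samples, 2 features.
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0], [5.0, 5.0]])
y = np.array([3.0, 3.0, 7.0, 7.0, 10.0])
w = np.array([1.0, 1.0])

print(ridge_cost(X, y, w, lam=0.0))  # lam = 0: plain least-squares loss
print(ridge_cost(X, y, w, lam=1.0))  # lam > 0: cost grows with the size of the weights
```

With λ = 0 the cost reduces to plain least squares; increasing λ makes large weights more expensive, which is exactly the bias referred to above. In practice, scikit-learn's `Ridge(alpha=...)` fits this penalized objective directly.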

Lasso Regression

  • Lasso regression is another regularization technique used to reduce the complexity of the model. LASSO stands for Least Absolute Shrinkage and Selection Operator.
  • It is similar to Ridge regression except that the penalty term contains the absolute values of the weights instead of their squares.
  • Because it takes absolute values, it can shrink a coefficient exactly to 0, whereas Ridge regression can only shrink it close to 0; this is why Lasso can also act as a feature selector, as the sketch below illustrates.
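
To see the shrink-to-zero behaviour concretely, here is a short scikit-learn sketch comparing the two penalties; the synthetic data, the random seed, and the `alpha` values (scikit-learn's name for λ) are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

# Synthetic data (illustrative setup): y depends on the first feature only;
# the second feature is pure noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=100)

ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)

print("Ridge coefficients:", ridge.coef_)  # noise coefficient shrunk near 0, but not exactly 0
print("Lasso coefficients:", lasso.coef_)  # noise coefficient typically driven exactly to 0.0
```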