Explain boosting in Data Science

Boosting is an ensemble learning strategy. Unlike bagging, which trains models in parallel on independent bootstrap samples, boosting generates many weak models and trains them sequentially, combining them so that each new model depends on the models trained before it. At every iteration, the new model is trained with knowledge of its predecessors' mistakes: observations in the training data that earlier models misclassified or predicted poorly are given extra weight, forcing the next model to focus on them. Because each round corrects the systematic errors of the current ensemble, boosting is primarily used to reduce model bias (whereas bagging mainly reduces variance).
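To make the reweighting idea concrete, here is a minimal AdaBoost-style sketch in Python, assuming scikit-learn decision stumps as the weak models (the names n_rounds, stumps, and alphas are illustrative, not a fixed API). Each round fits a stump under the current observation weights, then increases the weights of the observations it got wrong.

```python
# Minimal AdaBoost-style sketch (illustrative, not a production implementation):
# each round trains a depth-1 decision tree ("stump"), then up-weights the
# observations the stump misclassified so the next stump concentrates on them.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
y = np.where(y == 0, -1, 1)           # AdaBoost convention: labels in {-1, +1}

n_rounds = 20
w = np.full(len(y), 1 / len(y))       # start with uniform observation weights
stumps, alphas = [], []

for _ in range(n_rounds):
    stump = DecisionTreeClassifier(max_depth=1)
    stump.fit(X, y, sample_weight=w)          # weak model sees current weights
    pred = stump.predict(X)

    err = np.sum(w[pred != y])                # weighted error this round
    err = np.clip(err, 1e-10, 1 - 1e-10)      # guard against division by zero
    alpha = 0.5 * np.log((1 - err) / err)     # this model's vote in the ensemble

    # Extra weight to misclassified observations, less to correct ones
    w *= np.exp(-alpha * y * pred)
    w /= w.sum()                              # renormalise to a distribution

    stumps.append(stump)
    alphas.append(alpha)

# Final prediction: weighted vote of all sequentially trained weak models
ensemble = np.sign(sum(a * s.predict(X) for a, s in zip(alphas, stumps)))
print("training accuracy:", np.mean(ensemble == y))
```

In practice you would reach for a library implementation such as scikit-learn's AdaBoostClassifier or a gradient-boosting package; the loop above simply mirrors the sequential reweighting described in the answer.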