Feature Scaling 📊

The values of numerical features are not always on the same scale. So your ML model, which is not as smart as you might have hoped, will give more importance to features on larger scales simply because they contain bigger values. This results in a poor model.

To tackle this, we scale the features to a fixed range so that the model doesn't get biased towards the high-valued ones. Scaling preserves the relative structure of the data while squeezing the values into a comparable, narrow range. Some of the most used scalers in sklearn are: MinMaxScaler, StandardScaler & Normalizer.

:bulb:MinMaxScaler: subtracts the minimum value of the feature and then divides by the range, where range is the difference between the original maximum and original minimum. It preserves the shape of the original distribution, with a default output range of [0, 1].
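
A quick sketch of this with scikit-learn, using a made-up toy matrix where one column is in single digits and the other in the thousands:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Toy data (hypothetical): two features on very different scales
X = np.array([[1.0, 1000.0],
              [2.0, 3000.0],
              [4.0, 5000.0]])

scaler = MinMaxScaler()              # default feature_range=(0, 1)
X_scaled = scaler.fit_transform(X)   # (x - min) / (max - min), per column
print(X_scaled)
# [[0.         0.        ]
#  [0.33333333 0.5       ]
#  [1.         1.        ]]
```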

:bulb:StandardScaler: standardizes a feature by subtracting the mean and then scaling to unit variance, where scaling to unit variance means dividing all the values by the standard deviation. It gives the distribution a mean of 0 and a standard deviation of 1.
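
Same toy matrix, standardized. After the transform, each column's mean is (approximately) 0 and its standard deviation is 1:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 1000.0],
              [2.0, 3000.0],
              [4.0, 5000.0]])

scaler = StandardScaler()
X_std = scaler.fit_transform(X)  # (x - mean) / std, per column
print(X_std.mean(axis=0))        # ~[0. 0.]
print(X_std.std(axis=0))         # [1. 1.]
```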

:bulb:Normalizer: works not on the columns, but on the rows. L2 normalization is applied to each observation so that the values in a row have unit norm after scaling.
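
A minimal sketch of row-wise normalization (toy values chosen so the math is easy to check by hand):

```python
import numpy as np
from sklearn.preprocessing import Normalizer

# Toy data (hypothetical): each ROW is one observation
X = np.array([[3.0, 4.0],
              [1.0, 1.0]])

normalizer = Normalizer(norm="l2")   # "l2" is also the default
X_norm = normalizer.fit_transform(X)
print(X_norm)
# [[0.6        0.8       ]   <- [3, 4] / 5
#  [0.70710678 0.70710678]]  <- [1, 1] / sqrt(2)

print(np.linalg.norm(X_norm, axis=1))  # every row now has unit L2 norm: [1. 1.]
```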

#machinelearning #datascience