There are three main reasons for dimensionality reduction:
- Visualization
- Interpretability
- Time and space complexity
Let’s understand the first reason with an example:
Imagine we are working with the MNIST dataset, which contains 28 × 28 images; when we flatten each image into a feature vector, we get 784 features.
If we treat each feature as one dimension, how can we picture 784 dimensions in our mind?
We simply cannot visualize how points scatter in a 784-dimensional space.
That is the first reason why Dimensionality Reduction is Important!
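To make the idea concrete, here is a minimal sketch (using random arrays as a stand-in for the real MNIST images, and a plain SVD-based PCA rather than any particular library's implementation) of flattening 28 × 28 images into 784 features and projecting them down to 2 dimensions so they could be scatter-plotted:

```python
import numpy as np

# Stand-in for MNIST: 100 random 28x28 grayscale "images".
rng = np.random.default_rng(0)
images = rng.random((100, 28, 28))

# Flattening each image yields 784 features per sample.
X = images.reshape(len(images), -1)   # shape (100, 784)

# PCA via SVD: project the 784-dimensional points onto the
# top-2 principal components so they fit on a 2-D scatter plot.
X_centered = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
X_2d = X_centered @ Vt[:2].T          # shape (100, 2)

print(X.shape, X_2d.shape)
```

After this reduction, `X_2d` can be visualized directly, which is impossible for the original 784-dimensional points.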
Now let’s say you are a data scientist who has to explain your model to clients who do not understand Machine Learning. How will you make them understand what 784 features or dimensions mean?
In simple language, how do we interpret the model for the clients?
That is the second reason why Dimensionality Reduction is Important!
Finally, let’s say you are working for an internet-based company where responses must be delivered in milliseconds or less, so time complexity and space complexity matter a lot. More features need more time and memory, which these companies can’t afford.
That is the third reason why Dimensionality Reduction is Important!
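The latency point can be sketched with a brute-force nearest-neighbour search, whose cost grows with the number of features (the dataset sizes here are arbitrary, and slicing off the first 20 columns is only a stand-in for a real dimensionality-reduction step):

```python
import time
import numpy as np

# Hypothetical setup: 5,000 stored feature vectors and one query.
rng = np.random.default_rng(1)
n, d_full, d_reduced = 5000, 784, 20

X_full = rng.random((n, d_full))
X_reduced = X_full[:, :d_reduced]  # stand-in for a reduced representation

query_full = rng.random(d_full)
query_reduced = query_full[:d_reduced]

def nearest_neighbour(X, q):
    """Brute-force nearest neighbour: O(n * d) distance arithmetic."""
    start = time.perf_counter()
    distances = np.linalg.norm(X - q, axis=1)
    idx = int(distances.argmin())
    return idx, time.perf_counter() - start

idx_full, t_full = nearest_neighbour(X_full, query_full)
idx_reduced, t_reduced = nearest_neighbour(X_reduced, query_reduced)

print(f"784-d search: {t_full * 1000:.2f} ms, "
      f"20-d search: {t_reduced * 1000:.2f} ms")
```

With 784 features each query touches roughly 40× more numbers than with 20, so reducing the dimensionality directly shrinks the per-query work.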