**Parameter sharing:** In convolutions, we share the parameters while sliding the filter across the input. The intuition is that a feature detector that is useful in one part of the image is likely to be useful in another part of the image as well. So, by convolving a single filter over the entire input, the parameters are shared across all spatial positions.

Let’s understand this with an example. Suppose a 32 × 32 × 3 input image is convolved with six 5 × 5 filters, producing a 28 × 28 × 6 output.

If we had used a fully connected layer instead, the number of parameters would be **(32 × 32 × 3) × (28 × 28 × 6)**, which is nearly 14 million — far too many to be practical.

But in the case of a convolutional layer, the number of parameters is (5 × 5 + 1) × 6 (six 5 × 5 filters, each with a bias term), which equals **156**. Convolutional layers therefore reduce the number of parameters dramatically and speed up training of the model significantly.
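The two parameter counts above can be verified with a few lines of arithmetic (the layer sizes are taken from the example in the text):

```python
# Parameter-count comparison for a 32x32x3 input mapped to a 28x28x6 output.

input_size = 32 * 32 * 3      # 3,072 input values
output_size = 28 * 28 * 6     # 4,704 output values

# Fully connected: every output unit connects to every input value.
fc_params = input_size * output_size
print(fc_params)              # 14,450,688 -- roughly 14 million

# Convolutional: six 5x5 filters, each with one bias term,
# reused at every spatial position (parameter sharing).
conv_params = (5 * 5 + 1) * 6
print(conv_params)            # 156
```

The convolutional layer uses about 90,000× fewer parameters here, precisely because the same 156 weights are reused at all 28 × 28 output positions.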

**Sparsity of connections:** This means that each output value depends on only a small number of inputs (the filter's receptive field), instead of taking all the inputs into account.
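A small sketch makes the sparsity concrete. Assuming a single-channel input and one 5 × 5 filter for simplicity, each output value is computed from just a 5 × 5 patch of the input, so changing any pixel outside that patch leaves it unaffected:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((32, 32))      # single-channel input for simplicity
kernel = rng.random((5, 5))       # one 5x5 filter

# The output at position (i, j) depends only on the 5x5 patch
# of the input starting at (i, j) -- its receptive field.
i, j = 10, 7
out_ij = np.sum(image[i:i + 5, j:j + 5] * kernel)

# Perturbing a pixel far outside that patch leaves out_ij unchanged,
# demonstrating the sparsity of connections.
image[0, 0] += 100.0
assert np.sum(image[i:i + 5, j:j + 5] * kernel) == out_ij
```

Contrast this with a fully connected layer, where every output would change whenever any single input pixel changes.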