Why does L1 regularization cause parameter sparsity whereas L2 regularization does not?

L1 regularization leads to solutions of the optimization problem in which many of the parameters are exactly 0; in other words, L1 regularization leads to sparsity. Intuitively, the L1 penalty has a constant-magnitude gradient, so it keeps pushing small weights all the way to zero, whereas the L2 penalty's gradient shrinks in proportion to the weight, so weights become small but rarely exactly zero. Geometrically, the L1 constraint region is a diamond whose corners lie on the coordinate axes, and the loss contours tend to touch it at a corner where some coordinates are zero.
Read the following article, where this is explained visually.

Regularization in Machine Learning: Connect the dots
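To see the effect numerically, here is a minimal sketch (not from the linked article) using scikit-learn's Lasso (L1-penalized) and Ridge (L2-penalized) regression on synthetic data; the specific data setup and alpha values are illustrative assumptions.

```python
# Minimal sketch: compare how many coefficients L1 vs. L2 regularization
# drives to exactly zero on a synthetic regression problem.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

# Synthetic problem where only 5 of 50 features are actually informative.
X, y = make_regression(n_samples=200, n_features=50, n_informative=5,
                       noise=10.0, random_state=0)

lasso = Lasso(alpha=1.0).fit(X, y)   # L1 penalty
ridge = Ridge(alpha=1.0).fit(X, y)   # L2 penalty

# Lasso typically zeroes out most uninformative coefficients;
# Ridge shrinks them toward zero but almost never to exactly zero.
print("Lasso coefficients equal to 0:", int(np.sum(lasso.coef_ == 0)))
print("Ridge coefficients equal to 0:", int(np.sum(ridge.coef_ == 0)))
```

Running this, the Lasso model reports many coefficients that are exactly zero, while the Ridge model reports essentially none, which is the sparsity difference the question asks about.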