What are multilayer networks?

As more layers of neurons were added, the complexity of the problems that neural networks could solve grew.

A similar approach drives deep learning today: more layers (depth), combined with new techniques, are used to handle even more complicated and varied problems.

Hidden layers are significant because they allow features to be extracted from the input layer. However, the appropriate number of hidden layers (and of neurons in each layer) depends on the problem.

If a hidden layer has too many neurons, the network will overfit: it memorizes the training patterns rather than learning their structure, which limits its ability to generalize.
If the hidden layer has too few neurons, the network cannot represent the structure of the input space, which also limits its ability to generalize.

In general, among networks that achieve comparable performance, the smaller one (fewer neurons and weights) tends to generalize better.

A multilayer network is analogous to running several perceptron models in sequence. Inputs are fed into the hidden layer through one set of weights, and the hidden layer's outputs are fed into the output layer through another.
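The flow just described can be sketched as a minimal forward pass. This is an illustrative assumption, not the text's implementation: the layer sizes, the sigmoid activation, and the random weights are all chosen here only to show the shape of the computation.

```python
import numpy as np

def sigmoid(x):
    """Logistic activation, squashing values into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, w_hidden, w_output):
    """Inputs -> hidden layer -> output layer, each step through weights."""
    hidden = sigmoid(w_hidden @ x)       # hidden-layer activations
    output = sigmoid(w_output @ hidden)  # output-layer activations
    return output

# Illustrative sizes: 3 inputs, 4 hidden neurons, 2 output neurons.
rng = np.random.default_rng(0)
x = rng.normal(size=3)
w_hidden = rng.normal(size=(4, 3))
w_output = rng.normal(size=(2, 4))
print(forward(x, w_hidden, w_output).shape)  # (2,)
```

Each weight matrix maps one layer's activations to the next, which is why stacking them behaves like perceptrons wired in sequence.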

In a winner-take-all system, the outputs can represent many features or, as I illustrate in the next section, a single feature (where the output neuron with the largest activation is the winner).
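A winner-take-all readout can be sketched in a few lines: the neuron with the largest activation wins and all others are suppressed. The example activation values below are made up for illustration.

```python
import numpy as np

def winner_take_all(outputs):
    """Return a one-hot vector marking the strongest output neuron."""
    winner = np.zeros_like(outputs)
    winner[np.argmax(outputs)] = 1.0
    return winner

outputs = np.array([0.2, 0.9, 0.4])
print(winner_take_all(outputs))  # [0. 1. 0.]
```

Interpreting the winning index as the predicted class is what lets a multi-neuron output layer stand in for a single-feature decision.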