The information-processing paradigm of neural networks is loosely inspired by the human brain.
Neurons in the brain are densely interconnected, and chemical signals pass across synapses, where the axon of one neuron meets the dendrite of another.
The human brain contains roughly 100 billion neurons, each connected to as many as 10,000 others.
Artificial neural networks use weights and activation functions (such as sigmoids) to communicate signals (numbers) and activate neurons.
These networks solve a problem by adjusting their weights through a training procedure.
The graphic below depicts a single perceptron with three inputs, a weight for each input, a bias, and an output.
The output is calculated by multiplying each input by its weight, summing the products, adding the bias, and passing the result through an activation function.
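To make that computation concrete, here is a minimal sketch in Python of a perceptron forward pass with a sigmoid activation; the input values, weights, and bias are made-up illustrative numbers, not taken from the figure:

```python
import math

def sigmoid(x):
    # Squash the weighted sum into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def perceptron(inputs, weights, bias):
    # Sum of the input * weight products, plus the bias term
    weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias
    # Pass the result through the activation function
    return sigmoid(weighted_sum)

# Illustrative values only
inputs = [0.5, 0.3, 0.2]
weights = [0.4, 0.7, -0.2]
bias = 0.1
print(perceptron(inputs, weights, bias))  # a single activation value
```

Training, covered later, amounts to nudging `weights` and `bias` so this output moves closer to the desired value.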
Before diving deeper into back-propagation, I'll take the humble perceptron as a starting point.
Today’s neural networks can have a variety of topologies, including sparsely connected, fully connected, recurrent (with cycles), and other architectures.
Let’s have a look at the evolution of neural networks.